U.S. patent application number 11/461,554 was filed with the patent office on 2006-08-01 and published on 2007-08-09 as publication number 20070186077 for a system and method for executing instructions utilizing a preferred slot alignment mechanism. The invention is credited to Michael Karl Gschwind, Harm Peter Hofstee, Martin E. Hopkins, and James Allan Kahle.
United States Patent Application 20070186077
Kind Code: A1
Gschwind; Michael Karl; et al.
August 9, 2007

System and Method for Executing Instructions Utilizing a Preferred Slot Alignment Mechanism
Abstract
A system and method for executing instructions utilizing a preferred slot alignment mechanism is presented. A processor architecture uses a vector register file, a shared data path, and instruction execution logic to process both single instruction multiple data (SIMD) instructions and scalar instructions. The processor architecture divides a vector into four "slots," each including four bytes, and locates scalar data in "preferred slots" to ensure proper positioning. Instructions using the preferred slot mechanism include 1) shift and rotate instructions operating across an entire quad-word that specify a shift amount, 2) memory load and store instructions that require an address, and 3) branch instructions that use the preferred slot for branch conditions (conditional branches) and branch addresses (register-indirect branches). As a result, the processor architecture eliminates the requirement for separate issue slots, separate pipelines, and the control complexity of separate scalar units.
Inventors: Gschwind; Michael Karl (Chappaqua, NY); Hofstee; Harm Peter (Austin, TX); Hopkins; Martin E. (Bronxville, NY); Kahle; James Allan (Austin, TX)

Correspondence Address:
    IBM CORPORATION - AUSTIN (JVL); C/O VAN LEEUWEN & VAN LEEUWEN
    PO BOX 90609
    AUSTIN, TX 78709-0609, US

Family ID: 25219414

Appl. No.: 11/461,554

Filed: August 1, 2006

Related U.S. Patent Documents

    Application Number    Filing Date     Patent Number
    09/816,004            Mar 22, 2001    7,233,998
    11/461,554            Aug 1, 2006

Current U.S. Class: 712/3

Current CPC Class: H04L 63/168 (20130101); H04L 67/34 (20130101); G06F 9/30061 (20130101); H04L 67/10 (20130101); G06F 15/16 (20130101); G06F 9/4862 (20130101); H04L 29/06027 (20130101)

Class at Publication: 712/003

International Class: G06F 15/00 (20060101) G06F015/00
Claims
1. A microprocessor comprising: a shared data path that processes a
vector register, wherein the vector register is selected from a
plurality of vector registers included in a vector register file,
and wherein each of the vector registers in the vector register
file stores one of two types of data at a point in time, wherein
the first type of data is parallel data and the second type of data
is scalar data, the parallel data corresponding to data-parallel
processing of an input program, and the scalar data corresponding
to processing of a single data value of the input program; and
instruction execution logic coupled to the shared data path, the
instruction execution logic processing the selected vector register
in its entirety.
2. The microprocessor of claim 1 wherein the vector register file further comprises: a plurality of read data access ports, wherein each of the plurality of read data access ports requires reading from the selected vector register in its entirety in response to a read request; and a plurality of write data access ports, wherein each of the plurality of write data access ports requires writing to the selected vector register in its entirety in response to a write request.
3. The microprocessor of claim 2 wherein the instruction execution logic is adapted to: execute a memory access instruction, wherein the memory access instruction reads address information from one of the plurality of read data access ports.
4. The microprocessor of claim 3 wherein the memory access
instruction uses an address generated by adding a data address to a
data address offset, the data address included in a preferred slot
of the selected vector register, which is read using a first read
data access port included in the plurality of read data access
ports, and the data address offset included in a preferred slot of
a second vector register included in the plurality of vector
registers, which is read using a second read data access port
included in the plurality of read data access ports.
5. The microprocessor of claim 2 wherein the instruction execution logic executes a data formatting instruction to insert at least one byte of data stored in the selected vector register into a second vector register included in the plurality of vector registers, and wherein data formatting information corresponds to a relative position of at least one byte of information relative to a memory address of a vector that is stored in one of a local store and a memory, and wherein the selected vector register, the second vector register, and the data formatting information are retrieved using the plurality of read data access ports.
6. The microprocessor of claim 2 further comprising: branch
execution logic that executes a branch to register instruction,
wherein the branch execution logic retrieves a branch target
address from a preferred slot of the selected vector register using
one of the plurality of read data access ports.
7. The microprocessor of claim 2 further comprising: branch
execution logic that executes a conditional branch wherein a branch
condition is retrieved by testing a condition stored in the
preferred slot of the selected vector register using one of the
plurality of read data access ports.
8. The microprocessor of claim 2 further comprising: branch
execution logic that executes a branch and link instruction wherein
a link address is stored in the selected vector register using one
of the plurality of write data access ports.
9. The microprocessor of claim 8 wherein each vector register in the vector register file includes a plurality of slots, and wherein a write of link information includes an address in a first slot included in the plurality of slots, and wherein the remaining slots include values that are selected from the group consisting of a zero value, a predefined value, and an undefined value.
10. The microprocessor of claim 9 wherein the microprocessor
performs a code sequence that implements a function call return by
executing a branch to register with a register specification
corresponding to a specified register of a link instruction.
11. The microprocessor of claim 2 wherein a select instruction
performs a bitwise select between two data values under control of
a selection word stored in the selected vector register using one
of the plurality of read ports.
12. The microprocessor of claim 1 wherein a rotate or shift instruction is performed under control of a count specified in a preferred slot, the rotate or shift being adapted to ignore high-order bits of the count.
13. The microprocessor of claim 12 wherein the rotate or shift
instruction is used to implement a load and align sequence of a
scalar word with a two instruction sequence comprising: a first
load instruction receiving an address to load an aligned vector
word by ignoring a set of low order bits corresponding to a vector
length; and a second rotate or shift instruction receiving the
address to align the scalar word by performing a rotate specified
by the address, and ignoring high-order bits of the address that do
not correspond to a vector length.
14. The microprocessor of claim 13 wherein data formatting
information is used to extract data included in an entire vector
register that is included in the plurality of vector registers.
15. The microprocessor of claim 1 wherein a data access instruction
specifies an address in a local store operatively coupled to the
microprocessor.
16. The microprocessor of claim 1 wherein the microprocessor
executes an instruction to generate a data vector in the vector
register file, wherein a first data word included in the data
vector is used for additional computation, and at least one word in
the data vector is not used for additional computation.
17. The microprocessor of claim 1 wherein a preferred slot is
specified as a location to obtain a single data word from the
selected vector register for instructions requiring a single data
word input.
18. The microprocessor of claim 17 wherein the preferred slot is
located at a leftmost word element slot included in each of the
plurality of vector registers.
19. The microprocessor of claim 1 wherein each of the plurality of
vector registers stores one of a plurality of data types at a point
in time.
20. A computer-implemented method comprising: selecting a vector
register from a plurality of vector registers included in a vector
register file, wherein each of the vector registers in the vector
register file stores one of two types of data at a point in time,
wherein the first type of data is parallel data and the second type
of data is scalar data, the parallel data corresponding to
data-parallel processing of an input program, and the scalar data
corresponding to processing of a single data value of the input
program; and processing the data included in the selected vector
register in its entirety, wherein the processing includes obtaining
the scalar data from a predefined range of bytes included in the
selected vector register.
Description
RELATED APPLICATIONS
[0001] This application is a Continuation in Part (CIP) of U.S. Patent Application US 2002/0138637 A1, Ser. No. 09/816,004, filed on Mar. 22, 2001, titled "Computer Architecture and Software Cells for Broadband Networks," and has at least one inventor in common with the above-referenced U.S. Patent Application.
BACKGROUND OF THE INVENTION
[0002] 1. Technical Field
[0003] The present invention relates to a system and method for
executing instructions utilizing a preferred slot alignment
mechanism. More particularly, the present invention relates to a
processor architecture that includes a vector register file, a
shared data path, and instruction execution logic to process source
operands that correspond to both Single Instruction Multiple Data
(SIMD) computations and scalar computations.
[0004] 2. Description of the Related Art
[0005] The continuing importance of gaming applications and other numerically intensive workloads has generated an upsurge in novel computer architectures tailored for such functionality. Gaming applications feature highly parallel code for functions such as game physics, which have high computation and memory requirements. Gaming applications also include scalar code for functions such as game artificial intelligence that require fast response times and a full-featured programming environment.
[0006] A challenge found with these computer architectures is that they have overly complex designs, which result in area and power inefficiencies. For example, the computer architectures implement both Single Instruction Multiple Data (SIMD) execution units as well as scalar execution units. As a result, they include duplicated logic for instruction decoding, instruction issue, register dependence tracking and resolution, register files, execution resources, and instruction commit.
[0007] What is needed, therefore, is a system and method that
provides a power-efficient, area-efficient, low-complexity, and
high performance computer architecture.
SUMMARY
[0008] It has been discovered that the aforementioned challenges
are resolved using a processor architecture that uses a vector
register file, a shared data path, and instruction execution logic
to process source operands that correspond to both Single
Instruction Multiple Data (SIMD) computations and scalar
computations. The processor architecture divides a vector into four
"slots," each including four bytes, and locates scalar data items
in "preferred slots" to ensure proper positioning. As a result, the
processor architecture eliminates a requirement for separate issue
slots, separate pipelines, and the control complexity for separate
scalar units.
[0009] A local storage area includes instructions that are fed into a buffer in 128-byte increments, which supplies the instructions to a fetch unit in 64-byte increments (representing the first and second halves of a memory line). In turn, the instructions proceed through a shared datapath that includes instruction line buffers, issue/branch units, and a vector register file. The vector register file provides operands in data widths of 16 bytes, regardless of whether the instruction corresponds to a scalar computation or a SIMD computation, to an appropriate execution unit for further processing, such as a vector floating point unit, a vector fixed point unit, a data formatting and permute unit, and a load/store unit.
[0010] In order to process the scalar instructions correctly,
scalar data items are aligned using a "preferred slot" mechanism
with respect to a vector word. Instructions using the preferred
slot mechanism include 1) shift and rotate instructions operating
across an entire quad-word that specify a shift amount, 2) memory
load and store instructions that require an address, and 3) branch
instructions that use the preferred slot for branch conditions
(conditional branches) and branch addresses (register-indirect
branches). Branch and link instructions also use the preferred slot
mechanism to deposit a function return address in a return address
register.
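To make the preferred slot mechanism concrete, the following C sketch models the two-instruction load-and-align sequence described above (and recited in claim 13): an aligned quadword load that ignores the low-order address bits, followed by a rotate that ignores the high-order bits of the same address. The helper names, the big-endian byte order, and the software modeling are illustrative assumptions, not the disclosed hardware.

    #include <stdint.h>
    #include <string.h>

    /* Illustrative software model (not the disclosed hardware) of the
     * load-and-align sequence; helper names and big-endian byte order
     * are assumptions. */
    typedef struct { uint8_t b[16]; } vec128;

    /* First instruction: load the aligned quadword, ignoring the low
     * 4 bits of the address. */
    static vec128 load_aligned_quadword(const uint8_t *addr) {
        vec128 v;
        const uint8_t *aligned =
            (const uint8_t *)((uintptr_t)addr & ~(uintptr_t)15);
        memcpy(v.b, aligned, 16);
        return v;
    }

    /* Second instruction: rotate left by the byte offset; high-order
     * bits of the count are ignored, so the raw address can be used. */
    static vec128 rotate_left_bytes(vec128 v, unsigned count) {
        vec128 r;
        count &= 15;
        for (unsigned i = 0; i < 16; i++)
            r.b[i] = v.b[(i + count) & 15];
        return r;
    }

    /* Load a 32-bit scalar from an arbitrary address into the
     * preferred slot (bytes 0-3), then extract it. */
    static uint32_t load_scalar_word(const uint8_t *addr) {
        vec128 v = load_aligned_quadword(addr);
        v = rotate_left_bytes(v, (unsigned)(uintptr_t)addr);
        return ((uint32_t)v.b[0] << 24) | ((uint32_t)v.b[1] << 16) |
               ((uint32_t)v.b[2] << 8)  |  (uint32_t)v.b[3];
    }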
[0011] In one embodiment, the preferred slot is four bytes in length and starts at the leftmost word element slot, which includes byte locations 0 through 3. As such, when a scalar data item is only one byte in length, the byte resides in byte location 3. When a scalar data item is a half-word in length, the half-word resides in byte locations 2-3. When a vector includes a 32-bit address, the address resides in byte locations 0-3. When a scalar data item is one word in length, the word resides in byte locations 0-3. When a scalar data item is two words in length, the double word resides in byte locations 0-7. And, when a scalar data item is four words in length, the quad word resides in byte locations 0-15.
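The placement rules above can be summarized in a short, hedged C sketch; the big-endian byte numbering (byte 0 leftmost) follows the convention implied by the text, and the helper names are invented for illustration.

    #include <stdint.h>

    /* Illustrative placement of scalar data items into a 16-byte
     * register image per the rules above; names are assumptions. */
    typedef struct { uint8_t b[16]; } vreg;

    static vreg place_byte(uint8_t x) {          /* byte -> location 3 */
        vreg v = {{0}}; v.b[3] = x; return v;
    }
    static vreg place_halfword(uint16_t x) {     /* halfword -> 2-3 */
        vreg v = {{0}};
        v.b[2] = (uint8_t)(x >> 8); v.b[3] = (uint8_t)x; return v;
    }
    static vreg place_word(uint32_t x) {         /* word/address -> 0-3 */
        vreg v = {{0}};
        for (int i = 0; i < 4; i++) v.b[i] = (uint8_t)(x >> (24 - 8*i));
        return v;
    }
    static vreg place_doubleword(uint64_t x) {   /* double word -> 0-7 */
        vreg v = {{0}};
        for (int i = 0; i < 8; i++) v.b[i] = (uint8_t)(x >> (56 - 8*i));
        return v;
    }
    /* A quad word simply occupies byte locations 0-15, i.e. the whole
     * register. */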
[0012] The foregoing is a summary and thus contains, by necessity,
simplifications, generalizations, and omissions of detail;
consequently, those skilled in the art will appreciate that the
summary is illustrative only and is not intended to be in any way
limiting.
BRIEF DESCRIPTION OF THE DRAWINGS
[0013] The present invention may be better understood, and its
numerous objects, features, and advantages made apparent to those
skilled in the art by referencing the accompanying drawings.
[0014] FIG. 1 illustrates the overall architecture of a computer
network;
[0015] FIG. 2 is a diagram illustrating the structure of a
processor element (PE);
[0016] FIG. 3 is a diagram illustrating the structure of a
broadband engine (BE);
[0017] FIG. 4 is a diagram illustrating the structure of an
attached processing unit (APU);
[0018] FIG. 5 is a diagram illustrating the structure of a
processor element, visualizer (VS) and an optical interface;
[0019] FIG. 6 is a diagram illustrating one combination of
processor elements;
[0020] FIG. 7 illustrates another combination of processor
elements;
[0021] FIG. 8 illustrates yet another combination of processor
elements;
[0022] FIG. 9 illustrates yet another combination of processor
elements;
[0023] FIG. 10 illustrates yet another combination of processor
elements;
[0024] FIG. 11A illustrates the integration of optical interfaces
within a chip package;
[0025] FIG. 11B is a diagram of one configuration of processors
using the optical interfaces of FIG. 11A;
[0026] FIG. 11C is a diagram of another configuration of processors
using the optical interfaces of FIG. 11A;
[0027] FIG. 12A illustrates the structure of a memory system;
[0028] FIG. 12B illustrates the writing of data from a first
broadband engine to a second broadband engine;
[0029] FIG. 13 is a diagram of the structure of a shared memory for
a processor element;
[0030] FIG. 14A illustrates one structure for a bank of the memory
shown in FIG. 13;
[0031] FIG. 14B illustrates another structure for a bank of the
memory shown in FIG. 13;
[0032] FIG. 15 illustrates a structure for a direct memory access
controller;
[0033] FIG. 16 illustrates an alternative structure for a direct
memory access controller;
[0034] FIGS. 17A-17O illustrate the operation of data
synchronization;
[0035] FIG. 18 is a three-state memory diagram illustrating the various states of a memory location in accordance with the data synchronization scheme of the present invention;
[0036] FIG. 19 illustrates the structure of a key control table for
a hardware sandbox;
[0037] FIG. 20 illustrates a scheme for storing memory access keys
for a hardware sandbox;
[0038] FIG. 21 illustrates the structure of a memory access control
table for a hardware sandbox;
[0039] FIG. 22 is a flow diagram of the steps for accessing a
memory sandbox using the key control table of FIG. 19 and the
memory access control table of FIG. 21;
[0040] FIG. 23 illustrates the structure of a software cell;
[0041] FIG. 24 is a flow diagram of the steps for issuing remote
procedure calls to APUs;
[0042] FIG. 25 illustrates the structure of a dedicated pipeline
for processing streaming data;
[0043] FIG. 26 is a flow diagram of the steps performed by the
dedicated pipeline of FIG. 25 in the processing of streaming
data;
[0044] FIG. 27 illustrates an alternative structure for a dedicated
pipeline for the processing of streaming data;
[0045] FIG. 28 illustrates a scheme for an absolute timer for
coordinating the parallel processing of applications and data by
APUs;
[0046] FIG. 29 is a diagram showing a processor that uses a vector
register file, a shared data path, and instruction execution logic
to process single instruction multiple data (SIMD) and scalar
instructions;
[0047] FIG. 30A is a diagram showing two vectors added together
that do not require re-alignment;
[0048] FIG. 30B is a diagram showing two vectors that include
mis-aligned scalar data;
[0049] FIG. 31 is a diagram showing scalar data aligned in
registers based upon a preferred slot mechanism;
[0050] FIG. 32A is a diagram showing scalar data values included in
vectors that are rotated to a preferred slot before being added
together;
[0051] FIG. 32B is a diagram showing a read-modify-write sequence to store scalar data via a quadword-oriented storage interface;
[0052] FIG. 33A is a diagram showing an instruction that adds two
data values;
[0053] FIG. 33B is a flowchart showing steps taken in reading an
entire vector and operating on the entire vector;
[0054] FIG. 34A is a diagram showing an instruction that loads a
data value to a register file;
[0055] FIG. 34B is a diagram showing steps taken in loading a data
value into a register file location;
[0056] FIG. 35 is a block diagram showing microprocessor components
used for executing a load instruction in accordance with an
embodiment of the present invention;
[0057] FIG. 36A is a diagram showing an instruction that stores a
quadword;
[0058] FIG. 36B is a block diagram showing microprocessor
components used for executing a quadword store instruction in
accordance with an embodiment of the present invention;
[0059] FIG. 37A is a diagram showing an instruction that performs a
branch relative and set link instruction;
[0060] FIG. 37B is a flowchart showing steps taken in performing a
branch relative and set link instruction;
[0061] FIG. 37C is a block diagram showing microprocessor
components used for setting a vector to a link register when
executing a branch relative and set link instruction in accordance
with an embodiment of the present invention;
[0062] FIG. 38A is a diagram showing an instruction that performs a
branch indirect instruction;
[0063] FIG. 38B is a block diagram showing microprocessor
components used for executing a branch indirect instruction in
accordance with an embodiment of the present invention;
[0064] FIG. 39A is a diagram showing an instruction that performs a
branch if not zero word instruction;
[0065] FIG. 39B is a diagram showing an instruction that performs a
branch if zero halfword instruction;
[0066] FIG. 39C is a flowchart showing steps taken in performing
conditional branch instructions; and
[0067] FIG. 40 is a block diagram showing microprocessor components
used for executing a conditional branch instruction in accordance
with an embodiment of the present invention.
DETAILED DESCRIPTION
[0068] The following is intended to provide a detailed description
of an example of the invention and should not be taken to be
limiting of the invention itself. Rather, any number of variations
may fall within the scope of the invention, which is defined in the
claims following the description.
[0069] The overall architecture for a computer system 101 is shown
in FIG. 1. As illustrated in this figure, system 101 includes
network 104 to which is connected a plurality of computers and
computing devices. Network 104 can be a LAN, a global network, such
as the Internet, or any other computer network.
[0070] The computers and computing devices connected to network 104 (the network's "members") include, e.g., client computers 106, server computers 108, personal digital assistants (PDAs) 110, digital television (DTV) 112 and other wired or wireless computers and computing devices. The processors employed by the members of network 104 are constructed from the same common computing module. These processors also preferably all have the same ISA and perform processing in accordance with the same instruction set. The number of modules included within any particular processor depends upon the processing power required by that processor.
[0071] For example, since servers 108 of system 101 perform more
processing of data and applications than clients 106, servers 108
contain more computing modules than clients 106. PDAs 110, on the
other hand, perform the least amount of processing. PDAs 110,
therefore, contain the smallest number of computing modules. DTV
112 performs a level of processing between that of clients 106 and
servers 108. DTV 112, therefore, contains a number of computing
modules between that of clients 106 and servers 108. As discussed
below, each computing module contains a processing controller and a
plurality of identical processing units for performing parallel
processing of the data and applications transmitted over network
104.
[0072] This homogeneous configuration for system 101 facilitates
adaptability, processing speed and processing efficiency. Because
each member of system 101 performs processing using one or more (or
some fraction) of the same computing module, the particular
computer or computing device performing the actual processing of
data and applications is unimportant. The processing of a
particular application and data, moreover, can be shared among the
network's members. By uniquely identifying the cells comprising the
data and applications processed by system 101 throughout the
system, the processing results can be transmitted to the computer
or computing device requesting the processing regardless of where
this processing occurred. Because the modules performing this processing have a common structure and employ a common ISA, the computational burden of an added layer of software to achieve compatibility among the processors is avoided. This architecture and programming model facilitates the processing speed necessary to execute, e.g., real-time, multimedia applications.
[0073] To take further advantage of the processing speeds and
efficiencies facilitated by system 101, the data and applications
processed by this system are packaged into uniquely identified,
uniformly formatted software cells 102. Each software cell 102
contains, or can contain, both applications and data. Each software
cell also contains an ID to globally identify the cell throughout
network 104 and system 101. This uniformity of structure for the
software cells, and the software cells' unique identification
throughout the network, facilitates the processing of applications
and data on any computer or computing device of the network. For
example, a client 106 may formulate a software cell 102 but,
because of the limited processing capabilities of client 106,
transmit this software cell to a server 108 for processing.
Software cells can migrate, therefore, throughout network 104 for
processing on the basis of the availability of processing resources
on the network.
[0074] The homogeneous structure of processors and software cells
of system 101 also avoids many of the problems of today's
heterogeneous networks. For example, inefficient programming
models, which seek to permit processing of applications on any ISA
using any instruction set, e.g., virtual machines such as the Java
virtual machine, are avoided. System 101, therefore, can implement
broadband processing far more effectively and efficiently than
today's networks.
[0075] The basic processing module for all members of network 104
is the processor element (PE). FIG. 2 illustrates the structure of
a PE. As shown in this figure, PE 201 comprises a processing unit
(PU) 203, a direct memory access controller (DMAC) 205 and a
plurality of attached processing units (APUs), namely, APU 207, APU
209, APU 211, APU 213, APU 215, APU 217, APU 219 and APU 221. A
local PE bus 223 transmits data and applications among the APUs,
DMAC 205 and PU 203. Local PE bus 223 can have, e.g., a
conventional architecture or be implemented as a packet switch
network. Implementation as a packet switch network, while requiring
more hardware, increases available bandwidth.
[0076] PE 201 can be constructed using various methods for
implementing digital logic. PE 201 preferably is constructed,
however, as a single integrated circuit employing a complementary
metal oxide semiconductor (CMOS) on a silicon substrate.
Alternative materials for substrates include gallium arsenide, gallium aluminum arsenide and other so-called III-V compounds employing a wide variety of dopants. PE 201 also could be
implemented using superconducting material, e.g., rapid
single-flux-quantum (RSFQ) logic.
[0077] PE 201 is closely associated with a dynamic random access
memory (DRAM) 225 through a high bandwidth memory connection 227.
DRAM 225 functions as the main memory for PE 201. Although DRAM 225 preferably is a dynamic random access memory, DRAM 225 could be
implemented using other means, e.g., as a static random access
memory (SRAM), a magnetic random access memory (MRAM), an optical
memory or a holographic memory. DMAC 205 facilitates the transfer
of data between DRAM 225 and the APUs and PU of PE 201. As further
discussed below, DMAC 205 designates for each APU an exclusive area
in DRAM 225 into which only the APU can write data and from which
only the APU can read data. This exclusive area is designated a
"sandbox."
[0078] PU 203 can be, e.g., a standard processor capable of
stand-alone processing of data and applications. In operation, PU
203 schedules and orchestrates the processing of data and
applications by the APUs. The APUs preferably are single
instruction, multiple data (SIMD) processors. Under the control of
PU 203, the APUs perform the processing of these data and
applications in a parallel and independent manner. DMAC 205
controls accesses by PU 203 and the APUs to the data and
applications stored in the shared DRAM 225. Although PE 201
preferably includes eight APUs, a greater or lesser number of APUs
can be employed in a PE depending upon the processing power
required. Also, a number of PEs, such as PE 201, may be joined or
packaged together to provide enhanced processing power.
[0079] For example, as shown in FIG. 3, four PEs may be packaged or
joined together, e.g., within one or more chip packages, to form a
single processor for a member of network 104. This configuration is
designated a broadband engine (BE). As shown in FIG. 3, BE 301
contains four PEs, namely, PE 303, PE 305, PE 307 and PE 309.
Communications among these PEs are over BE bus 311. Broad bandwidth
memory connection 313 provides communication between shared DRAM
315 and these PEs. In lieu of BE bus 311, communications among the
PEs of BE 301 can occur through DRAM 315 and this memory
connection.
[0080] Input/output (I/O) interface 317 and external bus 319
provide communications between broadband engine 301 and the other
members of network 104. Each PE of BE 301 performs processing of
data and applications in a parallel and independent manner
analogous to the parallel and independent processing of
applications and data performed by the APUs of a PE.
[0081] FIG. 4 illustrates the structure of an APU. APU 402 includes local memory 406, registers 410, four floating point units 412 and four integer units 414. Again, however, depending upon the processing power required, a greater or lesser number of floating point units 412 and integer units 414 can be employed. In a
preferred embodiment, local memory 406 contains 128 kilobytes of
storage, and the capacity of registers 410 is 128 times 128 bits.
Floating point units 412 preferably operate at a speed of 32
billion floating point operations per second (32 GFLOPS), and
integer units 414 preferably operate at a speed of 32 billion
operations per second (32 GOPS).
[0082] Local memory 406 is not a cache memory. Local memory 406 is preferably constructed as an SRAM. Cache coherency support for an
APU is unnecessary. A PU may require cache coherency support for
direct memory accesses initiated by the PU. Cache coherency support
is not required, however, for direct memory accesses initiated by
an APU or for accesses from and to external devices.
[0083] APU 402 further includes bus 404 for transmitting
applications and data to and from the APU. In a preferred
embodiment, this bus is 1,024 bits wide. APU 402 further includes
internal busses 408, 420 and 418. In a preferred embodiment, bus
408 has a width of 256 bits and provides communications between
local memory 406 and registers 410. Busses 420 and 418 provide
communications between, respectively, registers 410 and floating
point units 412, and registers 410 and integer units 414. In a
preferred embodiment, the width of busses 418 and 420 from
registers 410 to the floating point or integer units is 384 bits,
and the width of busses 418 and 420 from the floating point or
integer units to registers 410 is 128 bits. The larger width of
these busses from registers 410 to the floating point or integer
units than from these units to registers 410 accommodates the
larger data flow from registers 410 during processing. A maximum of
three words are needed for each calculation. The result of each
calculation, however, normally is only one word.
[0084] FIGS. 5-10 further illustrate the modular structure of the
processors of the members of network 104. For example, as shown in
FIG. 5, a processor may comprise a single PE 502. As discussed
above, this PE typically comprises a PU, DMAC and eight APUs. Each
APU includes local storage (LS). On the other hand, a processor may
comprise the structure of visualizer (VS) 505. As shown in FIG. 5,
VS 505 comprises PU 512, DMAC 514 and four APUs, namely, APU 516,
APU 518, APU 520 and APU 522. The space within the chip package
normally occupied by the other four APUs of a PE is occupied in
this case by pixel engine 508, image cache 510 and cathode ray tube
controller (CRTC) 504. Depending upon the speed of communications
required for PE 502 or VS 505, optical interface 506 also may be
included on the chip package.
[0085] Using this standardized, modular structure, numerous other
variations of processors can be constructed easily and efficiently.
For example, the processor shown in FIG. 6 comprises two chip
packages, namely, chip package 602 comprising a BE and chip package
604 comprising four VSs. Input/output (I/O) 606 provides an
interface between the BE of chip package 602 and network 104. Bus
608 provides communications between chip package 602 and chip
package 604. Input output processor (IOP) 610 controls the flow of
data into and out of I/O 606. I/O 606 may be fabricated as an
application specific integrated circuit (ASIC). The output from the
VSs is video signal 612.
[0086] FIG. 7 illustrates a chip package for a BE 702 with two
optical interfaces 704 and 706 for providing ultra high speed
communications to the other members of network 104 (or other chip
packages locally connected). BE 702 can function as, e.g., a server
on network 104.
[0087] The chip package of FIG. 8 comprises two PEs 802 and 804 and
two VSs 806 and 808. An I/O 810 provides an interface between the
chip package and network 104. The output from the chip package is a
video signal. This configuration may function as, e.g., a graphics
work station.
[0088] FIG. 9 illustrates yet another configuration. This
configuration contains one-half of the processing power of the
configuration illustrated in FIG. 8. Instead of two PEs, one PE 902
is provided, and instead of two VSs, one VS 904 is provided. I/O
906 has one-half the bandwidth of the I/O illustrated in FIG. 8.
Such a processor also may function, however, as a graphics work
station.
[0089] A final configuration is shown in FIG. 10. This processor
consists of only a single VS 1002 and an I/O 1004. This
configuration may function as, e.g., a PDA.
[0090] FIG. 11A illustrates the integration of optical interfaces
into a chip package of a processor of network 104. These optical
interfaces convert optical signals to electrical signals and
electrical signals to optical signals and can be constructed from a
variety of materials including, e.g., gallium arsenide, aluminum gallium arsenide, germanium and other elements or compounds. As
shown in this figure, optical interfaces 1104 and 1106 are
fabricated on the chip package of BE 1102. BE bus 1108 provides
communication among the PEs of BE 1102, namely, PE 1110, PE 1112,
PE 1114, PE 1116, and these optical interfaces. Optical interface
1104 includes two ports, namely, port 1118 and port 1120, and
optical interface 1106 also includes two ports, namely, port 1122
and port 1124. Ports 1118, 1120, 1122 and 1124 are connected to,
respectively, optical wave guides 1126, 1128, 1130 and 1132.
Optical signals are transmitted to and from BE 1102 through these
optical wave guides via the ports of optical interfaces 1104 and
1106.
[0091] A plurality of BEs can be connected together in various
configurations using such optical wave guides and the four optical
ports of each BE. For example, as shown in FIG. 11B, two or more
BEs, e.g., BE 1152, BE 1154 and BE 1156, can be connected serially
through such optical ports. In this example, optical interface 1166
of BE 1152 is connected through its optical ports to the optical
ports of optical interface 1160 of BE 1154. In a similar manner,
the optical ports of optical interface 1162 on BE 1154 are
connected to the optical ports of optical interface 1164 of BE
1156.
[0092] A matrix configuration is illustrated in FIG. 11C. In this
configuration, the optical interface of each BE is connected to two
other BEs. As shown in this figure, one of the optical ports of
optical interface 1188 of BE 1172 is connected to an optical port
of optical interface 1182 of BE 1176. The other optical port of
optical interface 1188 is connected to an optical port of optical
interface 1184 of BE 1178. In a similar manner, one optical port of
optical interface 1190 of BE 1174 is connected to the other optical
port of optical interface 1184 of BE 1178. The other optical port
of optical interface 1190 is connected to an optical port of
optical interface 1186 of BE 1180. This matrix configuration can be
extended in a similar manner to other BEs.
[0093] Using either a serial configuration or a matrix
configuration, a processor for network 104 can be constructed of
any desired size and power. Of course, additional ports can be
added to the optical interfaces of the BEs, or to processors having
a greater or lesser number of PEs than a BE, to form other
configurations.
[0094] FIG. 12A illustrates the control system and structure for the DRAM of a BE. A similar control system and structure is employed in processors having other sizes and containing more or fewer PEs. As shown in this figure, a cross-bar switch connects each
DMAC 1210 of the four PEs comprising BE 1201 to eight bank controls
1206. Each bank control 1206 controls eight banks 1208 (only four
are shown in the figure) of DRAM 1204. DRAM 1204, therefore,
comprises a total of sixty-four banks. In a preferred embodiment,
DRAM 1204 has a capacity of 64 megabytes, and each bank has a
capacity of 1 megabyte. The smallest addressable unit within each
bank, in this preferred embodiment, is a block of 1024 bits.
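Assuming a straightforward field split, the geometry described above (sixty-four 1-megabyte banks under eight bank controls, with a 1024-bit smallest addressable block) suggests a decomposition like the following C sketch; the exact bit layout is an assumption for illustration, not the disclosed decoding.

    #include <stdint.h>

    /* Illustrative address decomposition for 64 MB of DRAM in
     * sixty-four 1 MB banks with 128-byte (1024-bit) blocks; the
     * field layout is an assumption. */
    typedef struct {
        unsigned bank_control; /* 0..7: one of eight bank controls     */
        unsigned bank;         /* 0..7: one of eight banks per control */
        unsigned block;        /* 0..8191: 128-byte block in the bank  */
    } dram_addr;

    static dram_addr decode(uint32_t byte_addr) {
        dram_addr a;
        uint32_t bank_index = byte_addr >> 20;      /* 1 MB per bank   */
        a.bank_control = (bank_index >> 3) & 7;
        a.bank         = bank_index & 7;
        a.block        = (byte_addr >> 7) & 0x1FFF; /* 8192 blocks     */
        return a;
    }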
[0095] BE 1201 also includes switch unit 1212. Switch unit 1212
enables other APUs on BEs closely coupled to BE 1201 to access DRAM
1204. A second BE, therefore, can be closely coupled to a first BE,
and each APU of each BE can address twice the number of memory
locations normally accessible to an APU. The direct reading or
writing of data from or to the DRAM of a first BE from or to the
DRAM of a second BE can occur through a switch unit such as switch
unit 1212.
[0096] For example, as shown in FIG. 12B, to accomplish such
writing, the APU of a first BE, e.g., APU 1220 of BE 1222, issues a
write command to a memory location of a DRAM of a second BE, e.g.,
DRAM 1228 of BE 1226 (rather than, as in the usual case, to DRAM
1224 of BE 1222). DMAC 1230 of BE 1222 sends the write command
through cross-bar switch 1221 to bank control 1234, and bank
control 1234 transmits the command to an external port 1232
connected to bank control 1234. DMAC 1238 of BE 1226 receives the
write command and transfers this command to switch unit 1240 of BE
1226. Switch unit 1240 identifies the DRAM address contained in the
write command and sends the data for storage in this address
through bank control 1242 of BE 1226 to bank 1244 of DRAM 1228.
Switch unit 1240, therefore, enables both DRAM 1224 and DRAM 1228
to function as a single memory space for the APUs of BE 1222.
[0097] FIG. 13 shows the configuration of the sixty-four banks of a
DRAM. These banks are arranged into eight rows, namely, rows 1302,
1304, 1306, 1308, 1310, 1312, 1314 and 1316 and eight columns,
namely, columns 1320, 1322, 1324, 1326, 1328, 1330, 1332 and 1334.
Each row is controlled by a bank controller. Each bank controller,
therefore, controls eight megabytes of memory.
[0098] FIGS. 14A and 14B illustrate different configurations for
storing and accessing the smallest addressable memory unit of a
DRAM, e.g., a block of 1024 bits. In FIG. 14A, DMAC 1402 stores eight 1024-bit blocks 1406 in a single bank 1404. In FIG. 14B, on the
other hand, while DMAC 1412 reads and writes blocks of data
containing 1024 bits, these blocks are interleaved between two
banks, namely, bank 1414 and bank 1416. Each of these banks,
therefore, contains sixteen blocks of data, and each block of data
contains 512 bits. This interleaving can facilitate faster
accessing of the DRAM and is useful in the processing of certain
applications.
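The interleaving of FIG. 14B can be sketched as follows; the even/odd half split is an assumption chosen only to show why a 1024-bit access can engage two banks at once.

    #include <stdint.h>
    #include <string.h>

    /* Illustrative interleaved layout: each 1024-bit (128-byte) block
     * is split into two 512-bit (64-byte) halves held in two banks. */
    enum { HALF_BYTES = 64, BLOCKS = 16 };

    typedef struct { uint8_t half[BLOCKS][HALF_BYTES]; } bank;

    static void write_interleaved(bank *even, bank *odd,
                                  unsigned block_index,
                                  const uint8_t *block /* 128 bytes */) {
        /* First 512 bits to one bank, second 512 bits to the other. */
        memcpy(even->half[block_index], block,              HALF_BYTES);
        memcpy(odd->half[block_index],  block + HALF_BYTES, HALF_BYTES);
    }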
[0099] FIG. 15 illustrates the architecture for a DMAC 1506 within a PE. As illustrated in this figure, the structural hardware comprising DMAC 1506 is distributed throughout the PE such that each APU 1502 has direct access to a structural node 1504 of DMAC 1506. Each node executes the logic appropriate for memory accesses by the APU to which the node has direct access.
[0100] FIG. 16 shows an alternative embodiment of the DMAC, namely,
a non-distributed architecture. In this case, the structural
hardware of DMAC 1606 is centralized. APUs 1602 and PU 1604
communicate with DMAC 1606 via local PE bus 1607. DMAC 1606 is
connected through a cross-bar switch to a bus 1608. Bus 1608 is
connected to DRAM 1610.
[0101] As discussed above, all of the multiple APUs of a PE can
independently access data in the shared DRAM. As a result, a first
APU could be operating upon particular data in its local storage at
a time during which a second APU requests these data. If the data
were provided to the second APU at that time from the shared DRAM,
the data could be invalid because of the first APU's ongoing
processing which could change the data's value. If the second
processor received the data from the shared DRAM at that time,
therefore, the second processor could generate an erroneous result.
For example, the data could be a specific value for a global
variable. If the first processor changed that value during its
processing, the second processor would receive an outdated value. A scheme is necessary, therefore, to synchronize the APUs' reading and writing of data from and to memory locations within the shared DRAM. This scheme must prevent the reading of data from a memory location upon which another APU currently is operating in its local storage, and which therefore are not current, and must prevent the writing of data into a memory location storing current data.
[0102] To overcome these problems, for each addressable memory
location of the DRAM, an additional segment of memory is allocated
in the DRAM for storing status information relating to the data
stored in the memory location. This status information includes a
full/empty (F/E) bit, the identification of an APU (APU ID)
requesting data from the memory location and the address of the
APU's local storage (LS address) to which the requested data should
be read. An addressable memory location of the DRAM can be of any
size. In a preferred embodiment, this size is 1024 bits.
[0103] The setting of the F/E bit to 1 indicates that the data
stored in the associated memory location are current. The setting
of the F/E bit to 0, on the other hand, indicates that the data
stored in the associated memory location are not current. If an APU
requests the data when this bit is set to 0, the APU is prevented
from immediately reading the data. In this case, an APU ID
identifying the APU requesting the data, and an LS address
identifying the memory location within the local storage of this
APU to which the data are to be read when the data become current,
are entered into the additional memory segment.
[0104] An additional memory segment also is allocated for each
memory location within the local storage of the APUs. This
additional memory segment stores one bit, designated the "busy
bit." The busy bit is used to reserve the associated LS memory
location for the storage of specific data to be retrieved from the
DRAM. If the busy bit is set to 1 for a particular memory location
in local storage, the APU can use this memory location only for the
writing of these specific data. On the other hand, if the busy bit
is set to 0 for a particular memory location in local storage, the
APU can use this memory location for the writing of any data.
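The bookkeeping described in the preceding paragraphs can be captured in two small C structures; the field widths and names are assumptions, since the text specifies only the F/E bit, APU ID, and LS address for a DRAM location and a busy bit for an LS location.

    #include <stdint.h>
    #include <stdbool.h>

    /* Illustrative status segments; field widths are assumptions. */
    typedef struct {
        bool     full_empty; /* F/E bit: 1 = current, 0 = not current */
        uint8_t  apu_id;     /* APU waiting on this location, if any  */
        uint32_t ls_address; /* LS address the data should be read to */
    } dram_status_segment;

    typedef struct {
        bool busy; /* 1 = reserved for specific data from the DRAM */
    } ls_status_segment;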
[0105] Examples of the manner in which the F/E bit, the APU ID, the
LS address and the busy bit are used to synchronize the reading and
writing of data from and to the shared DRAM of a PE are illustrated
in FIGS. 17A-17O.
[0106] As shown in FIG. 17A, one or more PEs, e.g., PE 1720,
interact with DRAM 1702. PE 1720 includes APU 1722 and APU 1740.
APU 1722 includes control logic 1724, and APU 1740 includes control
logic 1742. APU 1722 also includes local storage 1726. This local
storage includes a plurality of addressable memory locations 1728.
APU 1740 includes local storage 1744, and this local storage also
includes a plurality of addressable memory locations 1746. All of
these addressable memory locations preferably are 1024 bits in
size.
[0107] An additional segment of memory is associated with each LS
addressable memory location. For example, memory segments 1729 and
1734 are associated with, respectively, local memory locations 1731
and 1732, and memory segment 1752 is associated with local memory
location 1750. A "busy bit," as discussed above, is stored in each
of these additional memory segments. Local memory location 1732 is
shown with several Xs to indicate that this location contains
data.
[0108] DRAM 1702 contains a plurality of addressable memory
locations 1704, including memory locations 1706 and 1708. These
memory locations preferably also are 1024 bits in size. An
additional segment of memory also is associated with each of these
memory locations. For example, additional memory segment 1760 is
associated with memory location 1706, and additional memory segment
1762 is associated with memory location 1708. Status information
relating to the data stored in each memory location is stored in
the memory segment associated with the memory location. This status
information includes, as discussed above, the F/E bit, the APU ID
and the LS address. For example, for memory location 1708, this
status information includes F/E bit 1712, APU ID 1714 and LS
address 1716.
[0109] Using the status information and the busy bit, the
synchronized reading and writing of data from and to the shared
DRAM among the APUs of a PE, or a group of PEs, can be
achieved.
[0110] FIG. 17B illustrates the initiation of the synchronized
writing of data from LS memory location 1732 of APU 1722 to memory
location 1708 of DRAM 1702. Control 1724 of APU 1722 initiates the
synchronized writing of these data. Since memory location 1708 is
empty, F/E bit 1712 is set to 0. As a result, the data in LS
location 1732 can be written into memory location 1708. If this bit were set to 1 to indicate that memory location 1708 is full and contains current, valid data, on the other hand, control logic 1724 would receive an error message and be prohibited from writing data into this memory location.
[0111] The result of the successful synchronized writing of the
data into memory location 1708 is shown in FIG. 17C. The written
data are stored in memory location 1708, and F/E bit 1712 is set to
1. This setting indicates that memory location 1708 is full and
that the data in this memory location are current and valid.
[0112] FIG. 17D illustrates the initiation of the synchronized
reading of data from memory location 1708 of DRAM 1702 to LS memory
location 1750 of local storage 1744. To initiate this reading, the
busy bit in memory segment 1752 of LS memory location 1750 is set
to 1 to reserve this memory location for these data. The setting of
this busy bit to 1 prevents APU 1740 from storing other data in
this memory location.
[0113] As shown in FIG. 17E, control logic 1742 next issues a
synchronize read command for memory location 1708 of DRAM 1702.
Since F/E bit 1712 associated with this memory location is set to
1, the data stored in memory location 1708 are considered current
and valid. As a result, in preparation for transferring the data
from memory location 1708 to LS memory location 1750, F/E bit 1712
is set to 0. This setting is shown in FIG. 17F. The setting of this
bit to 0 indicates that, following the reading of these data, the
data in memory location 1708 will be invalid.
[0114] As shown in FIG. 17G, the data within memory location 1708
next are read from memory location 1708 to LS memory location 1750.
FIG. 17H shows the final state. A copy of the data in memory
location 1708 is stored in LS memory location 1750. F/E bit 1712 is
set to 0 to indicate that the data in memory location 1708 are
invalid. This invalidity is the result of alterations to these data
to be made by APU 1740. The busy bit in memory segment 1752 also is
set to 0. This setting indicates that LS memory location 1750 now
is available to APU 1740 for any purpose, i.e., this LS memory
location no longer is in a reserved state waiting for the receipt
of specific data. LS memory location 1750, therefore, now can be
accessed by APU 1740 for any purpose.
[0115] FIGS. 17I-17O illustrate the synchronized reading of data from a memory location of DRAM 1702, e.g., memory location 1708, to an LS memory location of an APU's local storage, e.g., LS memory location 1750 of local storage 1744, when the F/E bit for the memory location of DRAM 1702 is set to 0 to indicate that the data in this memory location are not current or valid. As shown in FIG.
17I, to initiate this transfer, the busy bit in memory segment 1752
of LS memory location 1750 is set to 1 to reserve this LS memory
location for this transfer of data. As shown in FIG. 17J, control
logic 1742 next issues a synchronize read command for memory
location 1708 of DRAM 1702. Since the F/E bit associated with this
memory location, F/E bit 1712, is set to 0, the data stored in
memory location 1708 are invalid. As a result, a signal is
transmitted to control logic 1742 to block the immediate reading of
data from this memory location.
[0116] As shown in FIG. 17K, the APU ID 1714 and LS address 1716
for this read command next are written into memory segment 1762. In
this case, the APU ID for APU 1740 and the LS memory location for
LS memory location 1750 are written into memory segment 1762. When
the data within memory location 1708 become current, therefore,
this APU ID and LS memory location are used for determining the
location to which the current data are to be transmitted.
[0117] The data in memory location 1708 become valid and current
when an APU writes data into this memory location. The synchronized
writing of data into memory location 1708 from, e.g., memory
location 1732 of APU 1722, is illustrated in FIG. 17L. This
synchronized writing of these data is permitted because F/E bit
1712 for this memory location is set to 0.
[0118] As shown in FIG. 17M, following this writing, the data in
memory location 1708 become current and valid. APU ID 1714 and LS
address 1716 from memory segment 1762, therefore, immediately are
read from memory segment 1762, and this information then is deleted
from this segment. F/E bit 1712 also is set to 0 in anticipation of
the immediate reading of the data in memory location 1708. As shown
in FIG. 17N, upon reading APU ID 1714 and LS address 1716, this
information immediately is used for reading the valid data in
memory location 1708 to LS memory location 1750 of APU 1740. The
final state is shown in FIG. 17O. This figure shows the valid data
from memory location 1708 copied to memory location 1750, the busy
bit in memory segment 1752 set to 0 and F/E bit 1712 in memory
segment 1762 set to 0. The setting of this busy bit to 0 enables LS
memory location 1750 now to be accessed by APU 1740 for any
purpose. The setting of this F/E bit to 0 indicates that the data
in memory location 1708 no longer are current and valid.
[0119] FIG. 18 summarizes the operations described above and the
various states of a memory location of the DRAM based upon the
states of the F/E bit, the APU ID and the LS address stored in the
memory segment corresponding to the memory location. The memory
location can have three states. These three states are an empty
state 1880 in which the F/E bit is set to 0 and no information is
provided for the APU ID or the LS address, a full state 1882 in
which the F/E bit is set to 1 and no information is provided for
the APU ID or LS address and a blocking state 1884 in which the F/E
bit is set to 0 and information is provided for the APU ID and LS
address.
[0120] As shown in this figure, in empty state 1880, a synchronized
writing operation is permitted and results in a transition to full
state 1882. A synchronized reading operation, however, results in a
transition to the blocking state 1884 because the data in the
memory location, when the memory location is in the empty state,
are not current.
[0121] In full state 1882, a synchronized reading operation is
permitted and results in a transition to empty state 1880. On the
other hand, a synchronized writing operation in full state 1882 is
prohibited to prevent overwriting of valid data. If such a writing
operation is attempted in this state, no state change occurs and an
error message is transmitted to the APU's corresponding control
logic.
[0122] In blocking state 1884, the synchronized writing of data
into the memory location is permitted and results in a transition
to empty state 1880. On the other hand, a synchronized reading
operation in blocking state 1884 is prohibited to prevent a
conflict with the earlier synchronized reading operation which
resulted in this state. If a synchronized reading operation is
attempted in blocking state 1884, no state change occurs and an
error message is transmitted to the APU's corresponding control
logic.
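The permitted and prohibited transitions of FIG. 18 reduce to a small state machine; the C sketch below is an illustrative restatement of paragraphs [0120]-[0122], not hardware.

    /* Illustrative restatement of the FIG. 18 state machine. Returns 0
     * and sets *next on a permitted transition; returns -1 (and leaves
     * the state unchanged) for the prohibited ones. */
    typedef enum { EMPTY, FULL, BLOCKING } mem_state;
    typedef enum { SYNC_READ, SYNC_WRITE } mem_op;

    static int next_state(mem_state s, mem_op op, mem_state *next) {
        switch (s) {
        case EMPTY:    /* write fills; read blocks on stale data      */
            *next = (op == SYNC_WRITE) ? FULL : BLOCKING;
            return 0;
        case FULL:     /* write would overwrite valid data            */
            if (op == SYNC_WRITE) return -1;
            *next = EMPTY;
            return 0;
        case BLOCKING: /* read conflicts with the earlier read        */
            if (op == SYNC_READ) return -1;
            *next = EMPTY; /* write is forwarded to the waiting APU   */
            return 0;
        }
        return -1;
    }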
[0123] The scheme described above for the synchronized reading and
writing of data from and to the shared DRAM also can be used for
eliminating the computational resources normally dedicated by a
processor for reading data from, and writing data to, external
devices. This input/output (I/O) function could be performed by a
PU. However, using a modification of this synchronization scheme,
an APU running an appropriate program can perform this function.
For example, using this scheme, a PU receiving an interrupt request
for the transmission of data from an I/O interface initiated by an
external device can delegate the handling of this request to this
APU. The APU then issues a synchronize write command to the I/O
interface. This interface in turn signals the external device that
data now can be written into the DRAM. The APU next issues a
synchronize read command to the DRAM to set the DRAM's relevant
memory space into a blocking state. The APU also sets to 1 the busy
bits for the memory locations of the APU's local storage needed to
receive the data. In the blocking state, the additional memory
segments associated with the DRAM's relevant memory space contain
the APU's ID and the address of the relevant memory locations of
the APU's local storage. The external device next issues a
synchronize write command to write the data directly to the DRAM's
relevant memory space. Since this memory space is in the blocking
state, the data are immediately read out of this space into the
memory locations of the APU's local storage identified in the
additional memory segments. The busy bits for these memory
locations then are set to 0. When the external device completes
writing of the data, the APU issues a signal to the PU that the
transmission is complete.
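As an illustration only, the delegated I/O sequence above might be traced as in the following C sketch; every function here is a hypothetical stand-in for a hardware command named in the text, not an actual interface.

    #include <stdio.h>

    /* Hypothetical stand-ins for the commands named above; each stub
     * merely traces the step it represents. */
    static void sync_write_to_io_interface(unsigned dram_addr)
    { printf("I/O interface: device may write DRAM @%#x\n", dram_addr); }
    static void sync_read_from_dram(unsigned dram_addr, unsigned ls_addr)
    { printf("DRAM @%#x -> blocking state, waiter LS @%#x\n",
             dram_addr, ls_addr); }
    static void set_busy_bits(unsigned ls_addr)
    { printf("busy bits set for LS @%#x\n", ls_addr); }
    static void signal_pu_transfer_complete(void)
    { printf("signal PU: transmission complete\n"); }

    /* The delegated I/O sequence, in the order described above. */
    static void apu_handle_external_input(unsigned dram_addr,
                                          unsigned ls_addr) {
        sync_write_to_io_interface(dram_addr); /* device may now write */
        sync_read_from_dram(dram_addr, ls_addr); /* space -> blocking  */
        set_busy_bits(ls_addr);                /* reserve LS locations */
        /* ...the device's synchronize write is forwarded straight to
         * LS and the busy bits are cleared by the hardware... */
        signal_pu_transfer_complete();         /* notify the PU        */
    }

    int main(void) { apu_handle_external_input(0x1000, 0x80); return 0; }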
[0124] Using this scheme, therefore, data transfers from external
devices can be processed with minimal computational load on the PU.
The APU delegated this function, however, should be able to issue
an interrupt request to the PU, and the external device should have
direct access to the DRAM.
[0125] The DRAM of each PE includes a plurality of "sandboxes." A
sandbox defines an area of the shared DRAM beyond which a
particular APU, or set of APUs, cannot read or write data. These
sandboxes provide security against the corruption of data being
processed by one APU by data being processed by another APU. These
sandboxes also permit the downloading of software cells from
network 104 into a particular sandbox without the possibility of
the software cell corrupting data throughout the DRAM. In the
present invention, the sandboxes are implemented in the hardware of
the DRAMs and DMACs. By implementing these sandboxes in this
hardware rather than in software, advantages in speed and security
are obtained.
[0126] The PU of a PE controls the sandboxes assigned to the APUs.
Since the PU normally operates only trusted programs, such as an
operating system, this scheme does not jeopardize security. In
accordance with this scheme, the PU builds and maintains a key
control table. This key control table is illustrated in FIG. 19. As
shown in this figure, each entry in key control table 1902 contains
an identification (ID) 1904 for an APU, an APU key 1906 for that
APU and a key mask 1908. The use of this key mask is explained
below. Key control table 1902 preferably is stored in a relatively
fast memory, such as a static random access memory (SRAM), and is
associated with the DMAC. The entries in key control table 1902 are
controlled by the PU. When an APU requests the writing of data to,
or the reading of data from, a particular storage location of the
DRAM, the DMAC evaluates the APU key 1906 assigned to that APU in
key control table 1902 against a memory access key associated with
that storage location.
[0127] As shown in FIG. 20, a dedicated memory segment 2010 is
assigned to each addressable storage location 2006 of a DRAM 2002.
A memory access key 2012 for the storage location is stored in this
dedicated memory segment. As discussed above, a further additional
dedicated memory segment 2008, also associated with each
addressable storage location 2006, stores synchronization information for writing data to, and reading data from, the storage location.
[0128] In operation, an APU issues a DMA command to the DMAC. This
command includes the address of a storage location 2006 of DRAM
2002. Before executing this command, the DMAC looks up the
requesting APU's key 1906 in key control table 1902 using the APU's
ID 1904. The DMAC then compares the APU key 1906 of the requesting
APU to the memory access key 2012 stored in the dedicated memory
segment 2010 associated with the storage location of the DRAM to
which the APU seeks access. If the two keys do not match, the DMA
command is not executed. On the other hand, if the two keys match,
the DMA command proceeds and the requested memory access is
executed.
[0129] An alternative embodiment is illustrated in FIG. 21. In this
embodiment, the PU also maintains a memory access control table
2102. Memory access control table 2102 contains an entry for each
sandbox within the DRAM. In the particular example of FIG. 21, the
DRAM contains 64 sandboxes. Each entry in memory access control
table 2102 contains an identification (ID) 2104 for a sandbox, a
base memory address 2106, a sandbox size 2108, a memory access key
2110 and an access key mask 2110. Base memory address 2106 provides
the address in the DRAM which starts a particular memory sandbox.
Sandbox size 2108 provides the size of the sandbox and, therefore,
the endpoint of the particular sandbox.
[0130] FIG. 22 is a flow diagram of the steps for executing a DMA
command using key control table 1902 and memory access control
table 2102. In step 2202, an APU issues a DMA command to the DMAC
for access to a particular memory location or locations within a
sandbox. This command includes a sandbox ID 2104 identifying the
particular sandbox for which access is requested. In step 2204, the
DMAC looks up the requesting APU's key 1906 in key control table
1902 using the APU's ID 1904. In step 2206, the DMAC uses the
sandbox ID 2104 in the command to look up in memory access control
table 2102 the memory access key 2110 associated with that sandbox.
In step 2208, the DMAC compares the APU key 1906 assigned to the
requesting APU to the access key 2110 associated with the sandbox.
In step 2210, a determination is made of whether the two keys
match. If the two keys do not match, the process moves to step 2212
where the DMA command does not proceed and an error message is sent
to either the requesting APU, the PU or both. On the other hand, if
at step 2210 the two keys are found to match, the process proceeds
to step 2214 where the DMAC executes the DMA command.
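By way of non-limiting illustration, the following C sketch models
the access check of steps 2204 through 2214; the structure layouts,
field widths, and function names are illustrative assumptions and do
not appear in the figures.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Illustrative layouts for key control table 1902 and memory
       access control table 2102; field widths are assumptions. */
    typedef struct {
        uint32_t apu_id;      /* ID 1904 */
        uint32_t apu_key;     /* APU key 1906 */
        uint32_t key_mask;    /* key mask 1908 */
    } KeyControlEntry;

    typedef struct {
        uint32_t sandbox_id;  /* ID 2104 */
        uint64_t base;        /* base memory address 2106 */
        uint64_t size;        /* sandbox size 2108 */
        uint32_t access_key;  /* memory access key 2110 */
        uint32_t key_mask;    /* access key mask */
    } SandboxEntry;

    /* Steps 2204-2214: look up both keys and allow the DMA command
       only if they match (exact match shown; the key masks are
       applied in a later sketch). */
    bool dmac_check_access(const KeyControlEntry *ktab, size_t n_apus,
                           const SandboxEntry *mtab, size_t n_boxes,
                           uint32_t apu_id, uint32_t sandbox_id)
    {
        const KeyControlEntry *k = NULL;
        const SandboxEntry *s = NULL;
        for (size_t i = 0; i < n_apus; i++)
            if (ktab[i].apu_id == apu_id) { k = &ktab[i]; break; }
        for (size_t i = 0; i < n_boxes; i++)
            if (mtab[i].sandbox_id == sandbox_id) { s = &mtab[i]; break; }
        if (k == NULL || s == NULL)
            return false;                        /* step 2212: error path */
        return k->apu_key == s->access_key;      /* steps 2210 and 2214   */
    }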
[0131] The key masks for the APU keys and the memory access keys
provide greater flexibility to this system. A key mask for a key
converts a masked bit into a wildcard. For example, if the key mask
1908 associated with an APU key 1906 has one of its bits set to
"mask," designated by, e.g., setting this bit in key mask 1908 to 1,
the corresponding bit of the APU key can be either a 1 or a 0 and
still match the memory access key. For example, the APU key might be
1010. This APU key
normally allows access only to a sandbox having an access key of
1010. If the APU key mask for this APU key is set to 0001, however,
then this APU key can be used to gain access to sandboxes having an
access key of either 1010 or 1011. Similarly, an access key 1010
with a mask set to 0001 can be accessed by an APU with an APU key
of either 1010 or 1011. Since both the APU key mask and the memory
key mask can be used simultaneously, numerous variations of
accessibility by the APUs to the sandboxes can be established.
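The masked comparison may be sketched as follows in C; the convention
that a 1 bit in a mask designates a wildcard is taken from the
description above, while the function name and types are illustrative.

    #include <stdbool.h>
    #include <stdint.h>

    /* A 1 bit in either key mask turns the corresponding key bit into
       a wildcard that is ignored in the comparison. */
    bool keys_match(uint32_t apu_key, uint32_t apu_mask,
                    uint32_t mem_key, uint32_t mem_mask)
    {
        uint32_t wildcard = apu_mask | mem_mask;
        return (apu_key & ~wildcard) == (mem_key & ~wildcard);
    }

    /* Example from the text: APU key 0b1010 with APU key mask 0b0001
       matches memory access keys 0b1010 and 0b1011. */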
[0132] The present invention also provides a new programming model
for the processors of system 101. This programming model employs
software cells 102. These cells can be transmitted to any processor
on network 104 for processing. This new programming model also
utilizes the unique modular architecture of system 101 and the
processors of system 101.
[0133] Software cells are processed directly by the APUs from the
APU's local storage. The APUs do not directly operate on any data
or programs in the DRAM. Data and programs in the DRAM are read
into the APU's local storage before the APU processes these data
and programs. The APU's local storage, therefore, includes a
program counter, stack and other software elements for executing
these programs. The PU controls the APUs by issuing direct memory
access (DMA) commands to the DMAC.
[0134] The structure of software cells 102 is illustrated in FIG.
23. As shown in this figure, a software cell, e.g., software cell
2302, contains routing information section 2304 and body 2306. The
information contained in routing information section 2304 is
dependent upon the protocol of network 104. Routing information
section 2304 contains header 2308, destination ID 2310, source ID
2312 and reply ID 2314. The destination ID includes a network
address. Under the TCP/IP protocol, e.g., the network address is an
Internet protocol (IP) address. Destination ID 2310 further
includes the identity of the PE and APU to which the cell should be
transmitted for processing. Source ID 2312 contains a network
address and identifies the PE and APU from which the cell
originated to enable the destination PE and APU to obtain
additional information regarding the cell if necessary. Reply ID
2314 contains a network address and identifies the PE and APU to
which queries regarding the cell, and the result of processing of
the cell, should be directed.
[0135] Cell body 2306 contains information independent of the
network's protocol. The exploded portion of FIG. 23 shows the
details of cell body 2306. Header 2320 of cell body 2306 identifies
the start of the cell body. Cell interface 2322 contains
information necessary for the cell's utilization. This information
includes global unique ID 2324, required APUs 2326, sandbox size
2328 and previous cell ID 2330.
[0136] Global unique ID 2324 uniquely identifies software cell 2302
throughout network 104. Global unique ID 2324 is generated on the
basis of source ID 2312, e.g., the unique identification of a PE or
APU within source ID 2312, and the time and date of generation or
transmission of software cell 2302. Required APUs 2326 provides the
minimum number of APUs required to execute the cell. Sandbox size
2328 provides the amount of protected memory in the required APUs'
associated DRAM necessary to execute the cell. Previous cell ID
2330 provides the identity of a previous cell in a group of cells
requiring sequential execution, e.g., streaming data.
[0137] Implementation section 2332 contains the cell's core
information. This information includes DMA command list 2334,
programs 2336 and data 2338. Programs 2336 contain the programs to
be run by the APUs (called "apulets"), e.g., APU programs 2360 and
2362, and data 2338 contain the data to be processed with these
programs. DMA command list 2334 contains a series of DMA commands
needed to start the programs. These DMA commands include DMA
commands 2340, 2350, 2355 and 2358. The PU issues these DMA
commands to the DMAC.
[0138] DMA command 2340 includes VID 2342. VID 2342 is the virtual
ID of an APU which is mapped to a physical ID when the DMA commands
are issued. DMA command 2340 also includes load command 2344 and
address 2346. Load command 2344 directs the APU to read particular
information from the DRAM into local storage. Address 2346 provides
the virtual address in the DRAM containing this information. The
information can be, e.g., programs from programs section 2336, data
from data section 2338 or other data. Finally, DMA command 2340
includes local storage address 2348. This address identifies the
address in local storage where the information should be loaded.
DMA commands 2350 contain similar information. Other DMA commands
are also possible.
[0139] DMA command list 2334 also includes a series of kick
commands, e.g., kick commands 2355 and 2358. Kick commands are
commands issued by a PU to an APU to initiate the processing of a
cell. DMA kick command 2355 includes virtual APU ID 2352, kick
command 2354 and program counter 2356. Virtual APU ID 2352
identifies the APU to be kicked, kick command 2354 provides the
relevant kick command and program counter 2356 provides the address
for the program counter for executing the program. DMA kick command
2358 provides similar information for the same APU or another
APU.
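By way of illustration, the DMA load and kick commands described
above can be modeled with the following C structures; the field
widths and type names are assumptions, not part of the cell format.

    #include <stdint.h>

    /* Illustrative encoding of DMA command 2340. */
    typedef struct {
        uint32_t vid;         /* VID 2342: virtual APU ID, mapped to a
                                 physical ID when the command is issued */
        uint32_t command;     /* load command 2344 */
        uint64_t dram_addr;   /* address 2346: virtual DRAM address of
                                 the information to be read */
        uint32_t ls_addr;     /* local storage address 2348: where the
                                 information should be loaded */
    } DmaLoadCommand;

    /* Illustrative encoding of DMA kick command 2355. */
    typedef struct {
        uint32_t vid;              /* virtual APU ID 2352 */
        uint32_t command;          /* kick command 2354 */
        uint32_t program_counter;  /* program counter 2356 for
                                      executing the program */
    } DmaKickCommand;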
[0140] As noted, the PUs treat the APUs as independent processors,
not co-processors. To control processing by the APUs, therefore,
the PU uses commands analogous to remote procedure calls. These
commands are designated "APU Remote Procedure Calls" (ARPCs). A PU
implements an ARPC by issuing a series of DMA commands to the DMAC.
The DMAC loads the APU program and its associated stack frame into
the local storage of an APU. The PU then issues an initial kick to
the APU to execute the APU Program.
[0141] FIG. 24 illustrates the steps of an ARPC for executing an
apulet. The steps performed by the PU in initiating processing of
the apulet by a designated APU are shown in the first portion 2402
of FIG. 24, and the steps performed by the designated APU in
processing the apulet are shown in the second portion 2404 of FIG.
24.
[0142] In step 2410, the PU evaluates the apulet and then
designates an APU for processing the apulet. In step 2412, the PU
allocates space in the DRAM for executing the apulet by issuing a
DMA command to the DMAC to set memory access keys for the necessary
sandbox or sandboxes. In step 2414, the PU enables an interrupt
request for the designated APU to signal completion of the apulet.
In step 2418, the PU issues a DMA command to the DMAC to load the
apulet from the DRAM to the local storage of the APU. In step 2420,
the DMA command is executed, and the apulet is read from the DRAM
to the APU's local storage. In step 2422, the PU issues a DMA
command to the DMAC to load the stack frame associated with the
apulet from the DRAM to the APU's local storage. In step 2423, the
DMA command is executed, and the stack frame is read from the DRAM
to the APU's local storage. In step 2424, the PU issues a DMA
command for the DMAC to assign a key to the APU to allow the APU to
read and write data from and to the hardware sandbox or sandboxes
designated in step 2412. In step 2426, the DMAC updates the key
control table (KTAB) with the key assigned to the APU. In step
2428, the PU issues a DMA command "kick" to the APU to start
processing of the program. Other DMA commands may be issued by the
PU in the execution of a particular ARPC depending upon the
particular apulet.
[0143] As indicated above, second portion 2404 of FIG. 24
illustrates the steps performed by the APU in executing the apulet.
In step 2430, the APU begins to execute the apulet in response to
the kick command issued at step 2428. In step 2432, the APU, at the
direction of the apulet, evaluates the apulet's associated stack
frame. In step 2434, the APU issues multiple DMA commands to the
DMAC to load data designated as needed by the stack frame from the
DRAM to the APU's local storage. In step 2436, these DMA commands
are executed, and the data are read from the DRAM to the APU's
local storage. In step 2438, the APU executes the apulet and
generates a result. In step 2440, the APU issues a DMA command to
the DMAC to store the result in the DRAM. In step 2442, the DMA
command is executed and the result of the apulet is written from
the APU's local storage to the DRAM. In step 2444, the APU issues
an interrupt request to the PU to signal that the ARPC has been
completed.
[0144] The ability of APUs to perform tasks independently under the
direction of a PU enables a PU to dedicate a group of APUs, and the
memory resources associated with a group of APUs, to performing
extended tasks. For example, a PU can dedicate one or more APUs,
and a group of memory sandboxes associated with these one or more
APUs, to receiving data transmitted over network 104 over an
extended period and to directing the data received during this
period to one or more other APUs and their associated memory
sandboxes for further processing. This ability is particularly
advantageous to processing streaming data transmitted over network
104, e.g., streaming MPEG or streaming ATRAC audio or video data. A
PU can dedicate one or more APUs and their associated memory
sandboxes to receiving these data and one or more other APUs and
their associated memory sandboxes to decompressing and further
processing these data. In other words, the PU can establish a
dedicated pipeline relationship among a group of APUs and their
associated memory sandboxes for processing such data.
[0145] In order for such processing to be performed efficiently,
however, the pipeline's dedicated APUs and memory sandboxes should
remain dedicated to the pipeline during periods in which processing
of apulets comprising the data stream does not occur. In other
words, the dedicated APUs and their associated sandboxes should be
placed in a reserved state during these periods. The reservation of
an APU and its associated memory sandbox or sandboxes upon
completion of processing of an apulet is called a "resident
termination." A resident termination occurs in response to an
instruction from a PU.
[0146] FIGS. 25, 26A and 26B illustrate the establishment of a
dedicated pipeline structure comprising a group of APUs and their
associated sandboxes for the processing of streaming data, e.g.,
streaming MPEG data. As shown in FIG. 25, the components of this
pipeline structure include PE 2502 and DRAM 2518. PE 2502 includes
PU 2504, DMAC 2506 and a plurality of APUs, including APU 2508, APU
2510 and APU 2512. Communications among PU 2504, DMAC 2506 and
these APUs occur through PE bus 2514. Wide bandwidth bus 2516
connects DMAC 2506 to DRAM 2518. DRAM 2518 includes a plurality of
sandboxes, e.g., sandbox 2520, sandbox 2522, sandbox 2524 and
sandbox 2526.
[0147] FIG. 26A illustrates the steps for establishing the
dedicated pipeline. In step 2610, PU 2504 assigns APU 2508 to
process a network apulet. A network apulet comprises a program for
processing the network protocol of network 104. In this case, this
protocol is the Transmission Control Protocol/Internet Protocol
(TCP/IP). TCP/IP data packets conforming to this protocol are
transmitted over network 104. Upon receipt, APU 2508 processes
these packets and assembles the data in the packets into software
cells 102. In step 2612, PU 2504 instructs APU 2508 to perform
resident terminations upon the completion of the processing of the
network apulet. In step 2614, PU 2504 assigns APUs 2510 and 2512 to
process MPEG apulets. In step 2615, PU 2504 instructs APUs 2510 and
2512 also to perform resident terminations upon the completion of
the processing of the MPEG apulets. In step 2616, PU 2504
designates sandbox 2520 as a source sandbox for access by APU 2508
and APU 2510. In step 2618, PU 2504 designates sandbox 2522 as a
destination sandbox for access by APU 2510. In step 2620, PU 2504
designates sandbox 2524 as a source sandbox for access by APU 2508
and APU 2512. In step 2622, PU 2504 designates sandbox 2526 as a
destination sandbox for access by APU 2512. In step 2624, APU 2510
and APU 2512 send synchronize read commands to blocks of memory
within, respectively, source sandbox 2520 and source sandbox 2524
to set these blocks of memory into the blocking state. The process
finally moves to step 2628 where establishment of the dedicated
pipeline is complete and the resources dedicated to the pipeline
are reserved. APUs 2508, 2510 and 2512 and their associated
sandboxes 2520, 2522, 2524 and 2526, therefore, enter the reserved
state.
[0148] FIG. 26B illustrates the steps for processing streaming MPEG
data by this dedicated pipeline. In step 2630, APU 2508, which
processes the network apulet, receives in its local storage TCP/IP
data packets from network 104. In step 2632, APU 2508 processes
these TCP/IP data packets and assembles the data within these
packets into software cells 102. In step 2634, APU 2508 examines
header 2320 (FIG. 23) of the software cells to determine whether
the cells contain MPEG data. If a cell does not contain MPEG data,
then, in step 2636, APU 2508 transmits the cell to a general
purpose sandbox designated within DRAM 2518 for processing other
data by other APUs not included within the dedicated pipeline. APU
2508 also notifies PU 2504 of this transmission.
[0149] On the other hand, if a software cell contains MPEG data,
then, in step 2638, APU 2508 examines previous cell ID 2330 (FIG.
23) of the cell to identify the MPEG data stream to which the cell
belongs. In step 2640, APU 2508 chooses an APU of the dedicated
pipeline for processing of the cell. In this case, APU 2508 chooses
APU 2510 to process these data. This choice is based upon previous
cell ID 2330 and load balancing factors. For example, if previous
cell ID 2330 indicates that the previous software cell of the MPEG
data stream to which the software cell belongs was sent to APU 2510
for processing, then the present software cell normally also will
be sent to APU 2510 for processing. In step 2642, APU 2508 issues a
synchronize write command to write the MPEG data to sandbox 2520.
Since this sandbox previously was set to the blocking state, the
MPEG data, in step 2644, automatically is read from sandbox 2520 to
the local storage of APU 2510. In step 2646, APU 2510 processes the
MPEG data in its local storage to generate video data. In step
2648, APU 2510 writes the video data to sandbox 2522. In step 2650,
APU 2510 issues a synchronize read command to sandbox 2520 to
prepare this sandbox to receive additional MPEG data. In step 2652,
APU 2510 processes a resident termination. This processing causes
this APU to enter the reserved state during which the APU waits to
process additional MPEG data in the MPEG data stream.
[0150] Other dedicated structures can be established among a group
of APUs and their associated sandboxes for processing other types
of data. For example, as shown in FIG. 27, a dedicated group of
APUs, e.g., APUs 2702, 2708 and 2714, can be established for
performing geometric transformations upon three dimensional objects
to generate two dimensional display lists. These two dimensional
display lists can be further processed (rendered) by other APUs to
generate pixel data. To perform this processing, sandboxes are
dedicated to APUs 2702, 2708 and 2714 for storing the three
dimensional objects and the display lists resulting from the
processing of these objects. For example, source sandboxes 2704,
2710 and 2716 are dedicated to storing the three dimensional
objects processed by, respectively, APU 2702, APU 2708 and APU
2714. In a similar manner, destination sandboxes 2706, 2712 and
2718 are dedicated to storing the display lists resulting from the
processing of these three dimensional objects by, respectively, APU
2702, APU 2708 and APU 2714.
[0151] Coordinating APU 2720 is dedicated to receiving in its local
storage the display lists from destination sandboxes 2706, 2712 and
2718. APU 2720 arbitrates among these display lists and sends them
to other APUs for the rendering of pixel data.
[0152] The processors of system 101 also employ an absolute timer.
The absolute timer provides a clock signal to the APUs and other
elements of a PE which is both independent of, and faster than, the
clock signal driving these elements. The use of this absolute timer
is illustrated in FIG. 28.
[0153] As shown in this figure, the absolute timer establishes a
time budget for the performance of tasks by the APUs. This time
budget provides a time for completing these tasks which is longer
than that necessary for the APUs' processing of the tasks. As a
result, for each task, there is, within the time budget, a busy
period and a standby period. All apulets are written for processing
on the basis of this time budget regardless of the APUs' actual
processing time or speed.
[0154] For example, for a particular APU of a PE, a particular task
may be performed during busy period 2802 of time budget 2804. Since
busy period 2802 is less than time budget 2804, a standby period
2806 occurs during the time budget. During this standby period, the
APU goes into a sleep mode during which less power is consumed by
the APU.
[0155] The results of processing a task are not expected by other
APUs, or other elements of a PE, until a time budget 2804 expires.
Using the time budget established by the absolute timer, therefore,
the results of the APUs' processing always are coordinated
regardless of the APUs' actual processing speeds.
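The time-budget discipline may be sketched as follows in C;
absolute_time and sleep_until are hypothetical stand-ins for the
absolute timer and the APU's low-power standby facility.

    #include <stdint.h>

    /* Hypothetical stand-ins; neither name comes from this
       specification. */
    extern uint64_t absolute_time(void);
    extern void sleep_until(uint64_t deadline);

    /* Complete the task, then stand by until the time budget expires,
       so the result becomes visible at the same time on fast and slow
       APUs alike. */
    void run_budgeted_task(void (*task)(void), uint64_t budget_ticks)
    {
        uint64_t deadline = absolute_time() + budget_ticks;
        task();                 /* busy period 2802 (or shorter 2808)   */
        sleep_until(deadline);  /* standby period 2806 (or longer 2810) */
    }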
[0156] In the future, the speed of processing by the APUs will
become faster. The time budget established by the absolute timer,
however, will remain the same. For example, as shown in FIG. 28, an
APU in the future will execute a task in a shorter period and,
therefore, will have a longer standby period. Busy period 2808,
therefore, is shorter than busy period 2802, and standby period
2810 is longer than standby period 2806. However, since programs
are written for processing on the basis of the same time budget
established by the absolute timer, coordination of the results of
processing among the APUs is maintained. As a result, faster APUs
can process programs written for slower APUs without causing
conflicts in the times at which the results of this processing are
expected.
[0157] In lieu of an absolute timer to establish coordination among
the APUs, the PU, or one or more designated APUs, can analyze the
particular instructions or microcode being executed by an APU in
processing an apulet for problems in the coordination of the APUs'
parallel processing created by enhanced or different operating
speeds. "No operation" ("NOOP" instructions can be inserted into
the instructions and executed by some of the APUs to maintain the
proper sequential completion of processing by the APUs expected by
the apulet. By inserting these NOOPs into the instructions, the
correct timing for the APUs' execution of all instructions can be
maintained.
[0158] FIG. 29 is a diagram showing a processor that uses a vector
register file, a shared data path, and instruction execution logic
to process single instruction multiple data (SIMD) and scalar
instructions. Attached processing unit (APU) 2900's architecture
promotes programmability by exploiting compiler techniques to
target data-parallel execution primitives. The architecture
provides fast, simple primitives, which the compiler uses to
implement higher-level idioms.
[0159] Over the past decade, microprocessors have become powerful
enough to tackle previously intractable tasks and cheap enough to
use in a range of new applications. Meanwhile, the volumes of data
to process have ballooned. This phenomenon is evident in everything
from consumer entertainment, which is transitioning from analog to
digital media, to supercomputing applications, which are starting
to address previously unsolvable computing problems involving
massive data volumes.
[0160] To address this shift from control function to data
processing, APU 2900 exploits data-level parallelism through a SIMD
architecture with the integration of scalar and SIMD execution. In
addition to improving the efficiency of many vectorization
transformations, this approach reduces the area and complexity
overhead that scalar processing imposes. Any complexity reduction
directly translates into increased performance because it enables
additional cores per given chip area.
[0161] Local store 2910 includes instructions that are fed into
buffer 2915 in 128-byte increments. Buffer 2915 separates the
instructions out into 64-byte increments (representing a first and
second portion of a memory line), which are supplied to fetch 2920.
The instructions proceed through a datapath that includes
instruction line buffers 2930, issue/branch 2940, and vector
register file 2950.
[0162] Instruction issue logic 2940 issues instructions for
execution in bundles of up to two instructions. Each instruction is
four bytes wide and specifies up to three source operands to be
provided by the vector register file 2950 to execution units 2960,
2970, 2980, and 2990. In order to process scalar computations
correctly, the scalar data values are aligned with respect to the
vector words stored in vector register file 2950 using a "preferred
slot" mechanism (see FIG. 31 and corresponding text for further
details).
[0163] Vector register file 2950 then provides source operands in
16-byte increments (regardless of whether the instruction is
performing a computation corresponding to a scalar or SIMD
computation in the source program), to an appropriate execution
unit for further processing, such as vector floating point unit
2960, vector fixed point unit 2970, data formatting and permute
unit 2980, and load/store unit 2990.
[0164] FIG. 30A is a diagram showing two vectors added together
that do not require re-alignment. Execution unit 3020 adds vector
3000 to vector 3010 to produce vector 3030. Each of vectors 3000
and 3010 is 16 bytes in length and is segmented into four elements,
or "slots," in four-byte increments. Vector
3000's slots include values x0, x1, x2, and x3. Similarly, vector
3010's slots include values y0, y1, y2, and y3. When added
together, they produce vector 3030 which includes z0, z1, z2, and
z3, where z0=x0+y0, z1=x1+y1, z2=x2+y2, and z3=x3+y3.
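The slot-wise addition of FIG. 30A may be sketched in C as follows;
the Vec128 type is an illustrative model of a 16-byte vector
register, not an architected type.

    #include <stdint.h>

    typedef struct { uint32_t slot[4]; } Vec128;  /* four 4-byte slots */

    /* z_i = x_i + y_i for each of the four word slots. */
    Vec128 vec_add(Vec128 x, Vec128 y)
    {
        Vec128 z;
        for (int i = 0; i < 4; i++)
            z.slot[i] = x.slot[i] + y.slot[i];
        return z;
    }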
[0165] FIG. 30B is a diagram showing two vectors that include
mis-aligned scalar operations. Vector 3040 includes data value x,
which must be added to vector 3050's data value y. A problem
arises when adding these two vectors because vector 3040's data
value x resides in "slot 1" whereas vector 3050's data value y
resides in "slot 2."
[0166] As a result, when execution unit 3060 adds vector 3040 to
vector 3050, the resulting vector (vector 3070) does not include
the correct data values. As can be seen, vector 3070 slot 1 equals
x+n5 and vector 3070's slot 2 equals n2+y. Therefore, in order to
add two vectors that include scalar operations, the invention
described herein uses a "preferred slot" alignment mechanism (see
FIG. 31 and corresponding text for further details).
[0167] FIG. 31 is a diagram showing scalar data words aligned in
registers based upon a preferred slot mechanism. Many instructions
require scalar operands, but in an architecture with only vector
registers, specifying a register is not sufficient because each
register contains a vector of multiple scalar values. As shown in
FIG. 31, an exemplary
vector is divided into four "slots" that include four bytes each,
which are slot 0 3100, slot 1 3110, slot 2 3120, and slot 3 3130.
To resolve scalar operand references, the APU architecture
convention is to locate scalar operands in a vector's "preferred
slot," which as FIG. 31 shows, corresponds to the leftmost word
element slot, consisting of bytes 0 to 3 (slot 0 3100).
[0168] Instructions using the preferred slot mechanism include 1)
shift and rotate instructions operating across an entire quad-word
that specify a shift amount, 2) memory load and store instructions
that require an address, and 3) branch instructions that use the
preferred slot for branch conditions (conditional branches) and
branch addresses (register-indirect branches). Branch and link
instructions also use the preferred slot mechanism to deposit a
function return address in a return address register, which the
cell application binary interface (ABI) allocates to vector
register 0.
[0169] As can be seen in FIG. 31, in accordance with the preferred
slot definition, when a scalar data item is one byte in length,
such as that shown in register 0 3140, the byte resides in byte
location 3. When a scalar data item is a half-word in length, such
as that shown in register 1 3150, the half-word resides in byte
locations 2-3. When a vector includes a 32-bit address, such as
that shown in register 2 3160, the address resides in byte
locations 0-3. When a scalar data item is one word in length, such
as that shown in register 3 3170, the word resides in byte
locations 0-3. When a scalar data item is two words in length, such
as that shown in register 4 3180, the double word resides in byte
locations 0-7. And, when a scalar data item is four words in
length, such as that shown in register 5 3190, the quad word
resides in byte locations 0-15.
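The byte placements above reduce to a simple rule: scalars narrower
than a word are right-aligned within slot 0, and wider scalars start
at byte 0. A minimal C sketch, with an illustrative function name:

    /* First byte location occupied by a scalar of the given size,
       under the preferred-slot convention of FIG. 31:
         1 byte   -> byte 3        2 bytes (halfword) -> bytes 2-3
         4 bytes  -> bytes 0-3     8 bytes (dword)    -> bytes 0-7
        16 bytes  -> bytes 0-15                                     */
    int preferred_slot_first_byte(int size_bytes)
    {
        return size_bytes < 4 ? 4 - size_bytes : 0;
    }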
[0170] The preferred slot is an expected location for scalar
parameters to APU instructions. In one embodiment, scalar
computations may occur in any slot. The preferred slot also serves
as a software abstraction in the ABI to identify the location of
scalar parameters on function calls and as function return values.
In addition, interprocedural register allocation may choose
alternative locations to pass scalar values across function call
boundaries.
[0171] Since the APU architecture uses only vector instruction
forms, the scalar nature of an instruction can be inferred only
from how the compiler uses that instruction. Meaning, the compiler
selects a slot position in a vector in which to perform
intermediate computations and from which to retrieve the result.
The hardware is unaware of this use and always performs the
specified operation across all slots. Removing explicit scalar
indication allows the software to perform scalar operations in any
element slots of a vector. The compiler may optimize alignment
handling and eliminate previously compulsory scalar data alignment
to the preferred slot. Unifying instruction encoding in this way to
provide the same instruction forms for scalar and SIMD computations
makes more opcode bits available to encode operations with up to
four distinct operands from a 128-entry register file.
[0172] FIG. 32A is a diagram showing scalar data values included in
vectors that are rotated or shifted to a preferred slot before
being added together. Vector 3200 includes data value x located in
its "slot 1," and vector 3220 includes data value y located in its
"slot 2." In order to add x to y, the data values included in
vectors 3200 and 3220 proceed through a rotation process, which
results in vectors 3210 and 3230, respectively.
[0173] As can be seen, vector 3210 now includes data value x in its
preferred slot (slot 0), and vector 3230 now includes data value y
also in its preferred slot (slot 0). As such, vector 3210 may be
added to vector 3230, which produces vector 3240. Vector 3240
includes data value z, which equals x+y. In one embodiment, when
data value x and y are in the same slot before rotation (e.g., slot
2), the vectors may be added together, and the resultant vector may
be rotated to place the summation of x+y in the preferred slot.
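A minimal C sketch of this alignment sequence follows, reusing the
illustrative Vec128 model from the sketch after paragraph [0164]; a
word-slot rotate stands in for the quadword rotate instruction.

    #include <stdint.h>

    typedef struct { uint32_t slot[4]; } Vec128;

    /* Rotate the vector left by a whole number of word slots. */
    static Vec128 rotate_words_left(Vec128 v, int words)
    {
        Vec128 r;
        for (int i = 0; i < 4; i++)
            r.slot[i] = v.slot[(i + words) & 3];
        return r;
    }

    /* Align both scalars to the preferred slot (slot 0), then add;
       z = x + y lands in the preferred slot of the result. */
    static uint32_t scalar_add(Vec128 x, int x_slot, Vec128 y, int y_slot)
    {
        Vec128 xa = rotate_words_left(x, x_slot);
        Vec128 ya = rotate_words_left(y, y_slot);
        return xa.slot[0] + ya.slot[0];
    }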
[0174] In accordance with a preferred code generation method, a
compiler rotates or shifts scalar data items into a common slot
position. In one embodiment, this is the preferred slot. In
accordance with another embodiment, the preferred slot is chosen to
be the leftmost word slot, allowing the ability to rotate words
into the preferred slot with a single quadword rotate instruction
using low-order address bits (stored in the preferred slot of a
vector register) to specify the rotate count for word data. Those
skilled in the art will appreciate the ability to adapt concepts of
the preferred slot to other locations within a vector, and to adapt
the alignment rotate or shift sequences accordingly. Those
skilled in the art will also understand the use of other
instruction sequences, such as those including but not limited to a
vector permute instruction.
[0175] FIG. 32B is a diagram showing a read-modify-write sequence
to store scalar data via a quadword-oriented storage interface.
To process scalar operations, an APU uses a compiler-generated
layering sequence for memory accesses when it merges scalar data
into memory. The APU inserts a scalar element into a quadword by
using a shuffle instruction to route bytes of data from the two
input registers.
[0176] To implement the read-modify-write sequence, the APU also
supports a "generate controls for insertion" instruction, which
generates a control word to steer the shuffle instruction to insert
a byte, halfword or word element into a position the memory address
specifies. As can be seen in FIG. 32B, vector 3250 includes data
value z, which is inserted in vector 3260's "slot 3" (shown in
vector 3290). Control word 3270 instructs shuffle 3280 as to where
to insert vector 3250 into vector 3260.
[0177] In accordance with a preferred code generation method, the
compiler generates 1) an instruction to generate controls for
inserting a scalar data value in a vector, 2) a load instruction to
load an aligned vector from memory, 3) a shuffle instruction to
insert the scalar data item to be stored into the vector retrieved
from memory under control of the control word, and 4) a store
instruction to store the aligned vector to memory.
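The four-step sequence above may be sketched in C as follows;
shuffle and gen_controls_for_insertion model the instructions
described in paragraph [0176], while load_quadword and
store_quadword are hypothetical stand-ins for the aligned load and
store.

    #include <stdint.h>

    typedef struct { uint8_t byte[16]; } QWord;

    /* Model of the shuffle instruction: each control byte selects one
       of 32 input bytes (0-15 from a, 16-31 from b). */
    static QWord shuffle(QWord a, QWord b, QWord control)
    {
        QWord r;
        for (int i = 0; i < 16; i++) {
            int sel = control.byte[i] & 31;
            r.byte[i] = (sel < 16) ? a.byte[sel] : b.byte[sel - 16];
        }
        return r;
    }

    /* Model of "generate controls for insertion": pass the loaded
       line (b) through unchanged, except at the naturally aligned
       offset selected by the address, where the scalar's
       preferred-slot bytes (from a) are inserted. Sizes 1, 2, 4 and 8
       are supported. */
    static QWord gen_controls_for_insertion(uint32_t addr, int size)
    {
        QWord c;
        int off = (int)(addr & 15) & ~(size - 1);
        for (int i = 0; i < 16; i++)
            c.byte[i] = (uint8_t)(16 + i);               /* take from b */
        for (int i = 0; i < size; i++)
            c.byte[off + i] = (uint8_t)((size < 4 ? 4 - size : 0) + i);
        return c;
    }

    extern QWord load_quadword(uint32_t addr);           /* hypothetical */
    extern void store_quadword(uint32_t addr, QWord q);  /* hypothetical */

    /* Steps 1-4 of the read-modify-write store. */
    static void store_scalar(QWord scalar_reg, uint32_t addr, int size)
    {
        QWord line = load_quadword(addr & ~15u);         /* aligned load  */
        QWord ctrl = gen_controls_for_insertion(addr, size);
        QWord merged = shuffle(scalar_reg, line, ctrl);  /* insert scalar */
        store_quadword(addr & ~15u, merged);             /* aligned store */
    }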
[0178] FIG. 33A is a diagram showing an instruction that adds two
data values. Instruction 3300 includes addition opcode in bits
0-10, which instructs an APU to add the operand stored in register
RA to the operand stored in register RB, and place the result in
register RT such that:
RT^{0:3} ← RA^{0:3} + RB^{0:3}
RT^{4:7} ← RA^{4:7} + RB^{4:7}
RT^{8:11} ← RA^{8:11} + RB^{8:11}
RT^{12:15} ← RA^{12:15} + RB^{12:15}
[0179] In accordance with the specification format for the APU
architecture, the following notations, functions and symbols are
used:
[0180] RT, RA, RB, . . . : Registers referred to by the RT, RA, RB
specifiers in the instruction word;
[0181] I10: Bit string corresponding to the I10 field of the
instruction being processed;
[0182] I16, D, E, . . . : Other named fields as specified in the
instruction specification correspond to the value of the specified
field in the instruction;
[0183] LSLR: Value of the local store limit register;
[0184] PC: Current instruction's address;
[0185] x^{0:3}: Superscript indicates a byte range of the expression
being superscripted;
[0186] ←: Assignment;
[0187] +: Addition;
[0188] ∥: Concatenation of bit strings;
[0189] &: Logical AND;
[0190] RepLeftBit(bitstring, integer): Replicate the leftmost bit of
the bitstring to widen the argument to a bit string with integer
bits;
[0191] LocStor(address, integer): Access the local store and return
integer consecutive bytes starting at the specified address--those
skilled in the art will understand that in another embodiment, a
local store access can be replaced by an access to any memory
hierarchy;
[0192] 0xHEXDIGITS: Indicates a hexadecimal number with hexadecimal
digits HEXDIGITS (0 . . . 9 and A . . . F);
[0193] 0bBINDIGITS: Indicates a binary number with binary digits
BINDIGITS (0 and 1);
[0194] These and other features of the APU specification will be
further clarified by consulting an exemplary APU implementation as
provided by the Cell SPU in accordance with the "Cell SPU
specification V1.0," incorporated herein by reference.
[0195] FIG. 33B is a flowchart showing steps taken in reading an
entire vector and operating on the entire vector, e.g., to
implement the vector addition instruction of FIG. 33A. Instruction
execution commences at 3310, whereupon logic reads at least one
entire vector operand from a vector register file at step 3320. For
example, the instruction may specify that two vectors be added in
accordance with the instruction shown in FIG. 33A.
[0196] At step 3330, processing operates on the entire vector.
Using the example shown in FIG. 33A, processing adds the operand
included in register RA to the operand included in register RB by
performing an addition of the data elements in respective slots,
and places the result in register RT. Processing then writes the
entire vector back to memory (at step 3340) and ends at 3350.
[0197] FIG. 34A is a diagram showing an instruction that loads a
data value to a register file. Instruction 3400 includes load
opcode in bits 0-7, which instructs an APU to add a signed value
included in the I10 field (with four zero bits appended) to the
value in register RA's preferred slot, while forcing the rightmost
four bits of the sum to zero to compute a memory address:
LSA ← (RepLeftBit(I10 ∥ 0b0000, 32) + RA^{0:3}) & LSLR & 0xFFFFFFF0
RT ← LocStor(LSA, 16)
[0198] In one embodiment, this address is further formatted by
masking it with the contents specified by a local store limit
register. The resultant address is used to access a local store (or
other memory) and the sixteen bytes at the local store address are
placed into register RT (see FIGS. 34B, 35, and corresponding text
for further details).
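The address computation may be transcribed into C as follows; the
function name is illustrative, and the arithmetic-shift sign
extension assumes a two's-complement implementation.

    #include <stdint.h>

    /* D-form load address: sign-extend the 10-bit I10 field, append
       four zero bits (scale by 16), add the base from RA's preferred
       slot, then mask with the LSLR and force 16-byte alignment. */
    uint32_t lqd_address(uint32_t i10, uint32_t ra_slot0, uint32_t lslr)
    {
        int32_t disp = ((int32_t)(i10 << 22)) >> 22;  /* RepLeftBit to 32 bits */
        uint32_t lsa = ((uint32_t)disp * 16 + ra_slot0) & lslr & 0xFFFFFFF0u;
        return lsa;  /* the 16 bytes at lsa are placed into register RT */
    }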
[0199] FIG. 34B is a diagram showing steps taken in loading a data
value into a register file location. Processing commences at 3410,
whereupon processing reads an address base register at 3420
(specified by the RA field shown in FIG. 34A). At step 3430,
processing selects the address field (located in the preferred slot
of the vector register specified by the RB field shown in FIG. 34A)
and, at step 3440, processing selects an address displacement
(located in the I10 field shown in FIG. 34A).
[0200] Processing generates an address at step 3450 by adding the
base address stored in the specified slot of register RA to the
displacement I10, and formats the address under control of a local
store limit register (LSLR), which identifies the local storage
area's ending and specifies which address bits to use. At step
3470, processing accesses memory corresponding to the formatted
address, selects data words (step 3480), and stores the data words
in a register file (step 3490). Processing ends at 3495 (see FIG.
35 and corresponding text for further details).
[0201] FIG. 35 is a diagram showing an apparatus corresponding to
the execution of a load instruction. Instruction word register 3500
provides an integer (located in FIG. 34A's I10 field) as
displacement. The displacement is formatted by formatting logic
3520. Instruction word register 3500 also provides control
information to control logic 3510, which instructs multiplexer 3550
to select the formatted displacement generated by formatting logic
3520 for D-form memory instructions, or an address offset contained
in the preferred slot of a specified index vector register
(3540).
[0202] Those skilled in the art will appreciate that in one
embodiment, different displacement sizes may be supported, and
displacement formatting is performed under control of control logic
3510 to select a specific displacement format based on the
instruction's opcode. Vector fields that are not required (such as
slots 1, 2, 3 of vector registers 3540 and 3555) are labeled as
"DC", which means "don't care". In parts of the logic flow, bits
corresponding to DC values are not implemented.
[0203] Multiplexer 3557 selects an address base from one of a
preferred slot of a specified vector register storing the base
address for D-form loads (i.e., those using a register+displacement
addressing format) and X-form loads (i.e., those using a
register+register addressing format), 0 for A-form loads (i.e.,
those specifying an absolute address in their displacement field),
and IAR 3556 for R-form loads (i.e., those using an instruction
address+displacement addressing format). Vector 3555 includes a
base address in the preferred slot (the vector operand is specified
by FIG. 34A's RA field).
[0204] Adder 3559 adds the output of multiplexer 3550, which
provides an address offset, to the output of multiplexer 3557,
which provides an address base. Address formatting 3570 receives
the generated address, and formats the address based upon local
store limit register (LSLR) 3560. LSLR 3560 specifies which bits of
the generated address to use as an actual address. The resulting
address is stored in data address register 3575.
[0205] Once formatted, processing accesses local store 3580 through
data address register 3575 and selects a quadword with selection
logic 3590. In turn, processing stores the quadword to a vector
register file entry that is specified by the RT field shown in FIG.
34A.
[0206] FIG. 36A is a diagram showing an instruction that stores a
quadword. Instruction 3600 includes quadword store opcode in bits
0-10, which instructs an APU to generate an address by adding
register RA's preferred slot value to register RB's preferred slot
value while forcing the sum's rightmost four bits to zero. The
contents of register RT are then stored at the generated local
store address such that:
LSA ← (RA^{0:3} + RB^{0:3}) & LSLR & 0xFFFFFFF0
LocStor(LSA, 16) ← RT
[0207] FIG. 36B is a diagram showing an apparatus for executing a
quadword store instruction. Vector 3630 includes an address offset
in its preferred slot. Vector operand 3630 is provided by a vector
register file entry that is specified by instruction 3600's RB
field shown in FIG. 36A.
[0208] Instruction word register 3610 provides control information
to control logic 3615, which instructs multiplexer 3640 to select
an address offset based on an instruction form. Multiplexer 3640
selects either the formatted displacement generated by formatting
logic 3620 for D-form memory instructions, or an address offset
contained in the preferred slot of a specified index vector
register (3630).
[0209] Multiplexer 3658 selects an address base from one of a
preferred slot of a specified vector register 3650 storing the base
address for D-form loads (i.e., those using a register+displacement
addressing format) and X-form loads (i.e., those using a
register+register addressing format), 0 for A-form loads (i.e.,
those specifying an absolute address in their displacement field),
and IAR 3656 for R-form loads (i.e., those using an instruction
address + displacement addressing format). Vector 3650 includes a
base address in the preferred slot. The vector operand is specified
by instruction 3600's RA field shown in FIG. 36A.
[0210] Adder 3665 adds multiplexer 3658's output, which provides an
address base, to multiplexer 3640's output. Address formatting 3670
receives the generated address, and formats the address based upon
local store limit register (LSLR) 3660, storing the result in data
address register DAR 3680.
[0211] Once formatted, processing stores store value 3655, at the
address specified by data address register DAR 3680, in one of a
local store, a memory hierarchy, or a store queue.
Store value 3655 is the value of a vector register file entry
specified by instruction 3600's RT field shown in FIG. 36A.
[0212] FIG. 37A is a diagram showing an instruction that performs a
branch relative and set link instruction. Instruction 3700 includes
branch relative and set link instruction opcode in bits 0-8. The
address of the target instruction is computed by adding the
value of the I16 field, extended on the right with two zero bits
and treated as a signed quantity, to the address of the
branch relative and set link instruction. The preferred slot of
register RT is set to the address of the byte following the branch
relative and set link instruction, and the remaining slots of
register RT are set to zero such that:
RT^{0:3} ← (PC + 4) & LSLR
RT^{4:15} ← 0
PC ← (PC + RepLeftBit(I16 ∥ 0b00, 32)) & LSLR
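Transcribed into C as a sketch, reusing the illustrative Vec128
model and assuming two's-complement sign extension:

    #include <stdint.h>

    typedef struct { uint32_t slot[4]; } Vec128;

    /* Branch relative and set link: the link address (PC+4) goes to
       RT's preferred slot, the remaining slots are zeroed, and PC
       advances by the sign-extended, word-scaled I16 field. */
    void brsl(Vec128 *rt, uint32_t *pc, uint32_t i16, uint32_t lslr)
    {
        rt->slot[0] = (*pc + 4) & lslr;                  /* RT^{0:3}   */
        rt->slot[1] = rt->slot[2] = rt->slot[3] = 0;     /* RT^{4:15}  */
        int32_t disp = ((int32_t)(i16 << 16)) >> 16;     /* RepLeftBit */
        *pc = (*pc + (uint32_t)disp * 4) & lslr;         /* I16 || 0b00 */
    }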
[0213] FIG. 37B is a flowchart showing steps taken in performing a
branch relative and set link instruction. Processing commences at
3710, whereupon processing selects an instruction address at step
3715. At step 3720, processing formats a link address vector and,
at step 3725, processing writes to the vector registers.
[0214] In step 3726, processing formats and selects a displacement
value (corresponding to instruction 3700's I16 value shown in FIG.
37A). At step 3727, processing generates an address and, at step
3728, formats the address under control of a local store limit
register LSLR. Processing initiates instruction execution logic
from the computed address at step 3729, and ends at 3730.
[0215] FIG. 37C is a diagram showing an apparatus for executing the
"set link" function of a branch relative and set link operation
(corresponding to steps 3715 through 3725 shown in FIG. 37B), which
is performed in parallel to a "branch relative" function in
accordance with the apparatus shown in FIG. 38B (corresponding to
steps 3726 through 3729 shown in FIG. 37B).
[0216] Instruction address register (IAR+4) 3760 includes a branch
instruction address incremented by four to indicate the address of
the next instruction following the branch instruction. This value
may be derived from the output of the IAR incrementing logic 3865
(shown in FIG. 38B), which is present in instruction fetch issue
logic 2940 (shown in FIG. 29). Link address 3770 includes the value
of IAR+4 register 3760 along with ninety-six other bits to form a
16B vector value to be written to the vector register file. In
accordance with one embodiment, these ninety-six bits are defined
to be "0".
[0217] In yet another embodiment, all four slots receive a copy of
the IAR+4 link value. In another embodiment, they correspond to the
prior value of these bits in the RT register. In accordance with
another embodiment, they correspond to the value of bits 0 to 95 of
the RT register prior to instruction execution (i.e., the leftmost
96b of the RT register), which allows the implementation of a
history of the last four link addresses in a single vector
register. In accordance with yet another embodiment, they have an
undefined value. Those skilled in the art will be able to derive
yet other values to define these bits within the scope of the
present invention.
[0218] Instruction word register 3740 provides control information
to control logic 3750, which instructs result multiplexer 3780 to
select link address 3770. In turn, link address 3770 is stored in a
vector register file, whereby the preferred slot of the register
file entry specified by instruction 3700's RT field shown in FIG.
37A is set to the address of the byte following the branch relative
and set link instruction.
[0219] FIG. 38A is a diagram showing an instruction that performs a
branch indirect instruction. Instruction 3800 includes branch
indirect instruction opcode in bits 0-10. Execution proceeds with
the instruction addressed by register RA's preferred slot. The
rightmost two bits of the value in register RA are ignored and
assumed to be zero. Interrupts may be enabled or disabled with the
E or D feature bits, which are located in bits 12 and 13,
respectively, such that:
PC ← RA^{0:3} & LSLR & 0xFFFFFFFC
if (E=0 and D=0) interrupt enable status is not modified
if (E=1 and D=0) enable interrupts at target
if (E=0 and D=1) disable interrupts at target
if (E=1 and D=1) reserved
[0220] FIG. 38B is a diagram showing an apparatus that executes a
branch indirect instruction. The components described herein are
also used to perform branch operations of other branch
instructions, such as those of the "branch relative and set link"
instruction shown in FIG. 37A and corresponding to steps 3726
through 3729 shown in FIG. 37B. The components may also perform a
"branch conditional" instruction, which is shown in FIGS. 39A and
39B, and corresponds to step 3960 shown in FIG. 39C.
[0221] Vector 3840, which is received from a vector register file,
includes a base address in its preferred slot. The base address is
a value corresponding to the vector register file entry specified
by instruction 3800's RA field shown in FIG. 38A, which corresponds
to a target address for register indirect branches. Instruction
word register 3810 provides bits corresponding to a displacement
field, such as instruction 3700's I16 field shown in FIG. 37A, which
is formatted by formatting logic 3825. The formatted displacement is
added to the value of instruction address register 3830, resulting
in a branch address that corresponds to R-form branches (PC-relative
branches).
[0222] Instruction word register 3810 also provides control
information to control logic 3820, which instructs multiplexer 3850
to select between multiple address forms, such as a
register-indirect specified address (vector 3840) for indirect
branches, or a PC-relative branch address computed by adder
3835 for R-form branches. Multiplexer 3850 may select addresses
corresponding to yet other addressing forms, such as an absolute
address (not shown). Multiplexer 3850 may also select a sequential
next instruction address computed by adder 3835 if no branch
instruction is present.
[0223] Multiplexer 3850's selection feeds into instruction fetch
address register (IFAR) 3860. When processing does not branch,
processing proceeds through loop 3865 whereupon processing
increments and processes the next instruction address. IFAR 3860's
output feeds into address formatting 3880, which is formatted using
LSLR 3870. Once formatted, the address is passed to the memory
hierarchy. A number of memory hierarchies may be employed,
including ones corresponding to a traditional cache-based main
memory hierarchy or a novel local store based memory hierarchy
using DMA engines to transfer instruction streams from and to main
memory.
[0224] FIG. 39A is a diagram showing an instruction that performs a
branch if not zero word instruction. Instruction 3900 includes
branch if not zero word instruction opcode in bits 0-8. During this
instruction, processing examines register RT's preferred slot and
branches to a target if the preferred slot value is not zero. The
address of the branch target is computed by appending two zero bits
to the value of instruction 3900's I16 field, extending it on the
left with copies of the most-significant bit, and adding it to the
value of the instruction counter such that:
If RT^{0:3} != 0 then
PC ← (PC + RepLeftBit(I16 ∥ 0b00, 32)) & LSLR & 0xFFFFFFFC
else
PC ← (PC + 4) & LSLR
End
[0225] FIG. 39B is a diagram showing an instruction that performs a
branch if not zero halfword instruction. Instruction 3910 includes
branch if not zero halfword instruction opcode in bits 0-8. During
this instruction, processing examines register RT's preferred slot and
branches to a target if the low-order half-word value in the
preferred slot is not zero. The address of the branch target is
computed by appending two zero bits to the value of instruction
3910's I16 field, extending it on the left with copies of the
most-significant bit, and adding it to the value of the instruction
counter such that:
If RT^{2:3} != 0 then
PC ← (PC + RepLeftBit(I16 ∥ 0b00, 32)) & LSLR & 0xFFFFFFFC
else
PC ← (PC + 4) & LSLR
End
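Both conditional branches may be sketched in one C function; the
halfword flag selects between the word test of FIG. 39A and the
halfword test of FIG. 39B, and the function name is illustrative.

    #include <stdint.h>

    /* Returns the next PC: the word-scaled relative target if the
       tested preferred-slot value (full word, or its low-order
       halfword) is not zero, and the sequential address otherwise. */
    uint32_t branch_not_zero(uint32_t rt_slot0, int halfword,
                             uint32_t pc, uint32_t i16, uint32_t lslr)
    {
        uint32_t test = halfword ? (rt_slot0 & 0xFFFFu) : rt_slot0;
        if (test != 0) {
            int32_t disp = ((int32_t)(i16 << 16)) >> 16;  /* RepLeftBit */
            return (pc + (uint32_t)disp * 4) & lslr & 0xFFFFFFFCu;
        }
        return (pc + 4) & lslr;
    }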
[0226] FIG. 39C is a flowchart showing steps taken in performing
conditional branch instructions. Processing commences at 3920,
whereupon processing reads a vector register at step 3930. At step
3940, processing computes decision inputs and, at step 3950,
processing computes a decision under control of a decision width
indicator and/or a condition indicator.
[0227] Processing, at step 3960, computes a target address, and at
step 3970, processing transfers control, if the decision so
indicates, by updating the IFAR. Processing ends at 3980.
[0228] FIG. 40 is a diagram showing an apparatus for executing a
conditional branch instruction. Instruction word register 4000
provides a displacement field, such as instruction 3900's or 3910's
I16 field shown in FIGS. 39A and 39B, respectively, to displacement
formatting logic 4010. The formatted displacement is added to the
value of instruction address register 4020, resulting in a branch
address that feeds into multiplexer 4060.
[0229] Vector 4030, which is received from a vector register file,
includes the value corresponding to the vector register file entry
specified by instruction 3900's or 3910's RT field shown in FIGS.
39A and 39B, respectively. Vector 4030's preferred slot is examined
via zero detect logic 4040, whose results are fed into branch
decision logic 4050. Branch decision logic 4050 uses the results to
determine whether to have multiplexer 4060 select between the
branch target address and the sequential next instruction address
generated by an address incrementer. Those skilled in the art will
appreciate that yet other decision logic may be used that
corresponds to the zero-detect logic of blocks 4040 and 4050 within
the scope of the present invention and that the exemplary
embodiment described herein is non-limiting.
[0230] Multiplexer 4060's output feeds into instruction fetch
address register (IFAR) 4070. When processing does not branch,
processing proceeds through loop 4075 whereupon processing
increments and processes the next instruction address. When
processing does branch, IFAR 4070's output feeds into address
formatting 4090, which is formatted using LSLR 4080. Once
formatted, the address is passed to the memory hierarchy.
[0231] In accordance with another aspect of the present embodiment,
at least one compare instruction is implemented. In accordance with
a preferred embodiment, at least one compare instruction operates
on a plurality of slot values and generates a data mask in each
slot that corresponds to "all 0" when the condition is not true,
and to "all 1" when the condition is true.
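A slot-wise compare of this kind may be sketched in C as follows,
reusing the illustrative Vec128 model; the function name is not an
architected mnemonic.

    #include <stdint.h>

    typedef struct { uint32_t slot[4]; } Vec128;

    /* Each word slot of the result is all ones where the condition
       holds and all zeros where it does not, ready to feed a select
       instruction or a conditional branch. */
    Vec128 vec_cmpeq(Vec128 a, Vec128 b)
    {
        Vec128 m;
        for (int i = 0; i < 4; i++)
            m.slot[i] = (a.slot[i] == b.slot[i]) ? 0xFFFFFFFFu : 0u;
        return m;
    }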
[0232] In accordance with another embodiment, the data mask vector
generated by the compare instruction feeds a select instruction.
In another embodiment, the data mask provides a condition input to
conditional branch instructions, such as those in accordance with
the instructions shown in FIGS. 39A and 39B.
[0233] In accordance with another embodiment, a minimal set of
compare instructions is implemented, such as a first "compare for
equality" and a second "compare for ordering" (e.g., "compare
greater than"). In this embodiment: [0234] "compare for not equal"
is performed by generating a code sequence for a false result of
the "compare for equality"; [0235] "compare A for less than B" is
implemented by generating a code sequence testing "compare B
greater than A"; [0236] "compare B greater than A" is implemented
by inverting the A and B operands to the compare instruction;
[0237] "compare A greater-or-equal B" is implemented by generating
code to test for a false result of "compare B greater than A"; and
[0238] "compare A less-or-equal B" is implemented by generating
code to test for a false result of "compare A greater than B," as
illustrated in the sketch below.
[0239] In at least one embodiment, two tests for ordering,
corresponding to comparison of signed and unsigned numbers, are
provided.
[0240] While particular embodiments of the present invention have
been shown and described, it will be obvious to those skilled in
the art that, based upon the teachings herein, changes and
modifications may be made without departing from this invention and
its broader aspects. Therefore, the appended claims are to
encompass within their scope all such changes and modifications as
are within the true spirit and scope of this invention.
Furthermore, it is to be understood that the invention is solely
defined by the appended claims. It will be understood by those with
skill in the art that if a specific number of an introduced claim
element is intended, such intent will be explicitly recited in the
claim, and in the absence of such recitation no such limitation is
present.
* * * * *