U.S. patent application number 10/959,700, for power management for processing modules, was filed with the patent office on October 5, 2004 and published on June 2, 2005. This patent application is currently assigned to Sony Computer Entertainment Inc. The invention is credited to Masakazu Suzuoki and Takeshi Yamazaki.
Application Number: 20050120254 (Appl. No. 10/959,700)
Family ID: 35453577
Publication Date: 2005-06-02
United States Patent Application 20050120254
Kind Code: A1
Suzuoki, Masakazu; et al.
June 2, 2005
Power management for processing modules
Abstract
A processing element (PE) includes a processing unit (PU) and a
number of attached processing units (APUs). The instruction set of
each APU is divided a priori into a number of types, each type
associated with a different amount of heat generation. Each APU
keeps track of the amount of each type of instruction executed over
a time period (the power information) and provides this power
information to the PU. The PU then performs power management as a
function of the provided power information from each APU, such as
directing a particular APU to enter an idle state to reduce power
consumption.
Inventors: Suzuoki, Masakazu (Tokyo, JP); Yamazaki, Takeshi (Tokyo, JP)
Correspondence Address:
    LERNER, DAVID, LITTENBERG, KRUMHOLZ & MENTLIK
    600 SOUTH AVENUE WEST
    WESTFIELD, NJ 07090
    US
Assignee: Sony Computer Entertainment Inc., Tokyo, JP
Family ID: 35453577
Appl. No.: 10/959,700
Filed: October 5, 2004
Related U.S. Patent Documents

    Application Number    Filing Date     Patent Number
    10/959,700            Oct 5, 2004
        09/816,004        Mar 22, 2001
    10/959,700            Oct 5, 2004
        09/815,554        Mar 22, 2001    6,826,662
    10/959,700            Oct 5, 2004
        09/816,020        Mar 22, 2001    6,526,491
    10/959,700            Oct 5, 2004
        09/815,558        Mar 22, 2001    6,809,734
    10/959,700            Oct 5, 2004
        09/816,752        Mar 22, 2001
Current U.S. Class: 713/320
Current CPC Class: G06F 1/3203 (20130101); Y02D 10/16 (20180101); G06F 1/206 (20130101); G06F 1/3243 (20130101); G06F 9/30145 (20130101); Y02D 10/152 (20180101); G06F 9/30167 (20130101); Y02D 10/00 (20180101)
Class at Publication: 713/320
International Class: G06F 001/26
Claims
1. A method for performing power management, the method comprising
the steps of: monitoring a rate of execution of instructions by a
processor; and estimating a power consumption rate as a function of
the monitored instruction execution rate.
2. The method of claim 1 wherein the rate of execution is monitored
based on a rate of fetching instructions for execution, wherein the
instructions include instructions having different types.
3. The method of claim 2, wherein the estimating step estimates a
heat level for the processor as a function of instruction count
values for each of the different types of instruction being
executed.
4. The method of claim 2, wherein the different types of
instructions include a floating point instruction and an integer
instruction.
5. The method of claim 2, wherein the different types of
instructions include a vector floating point instruction, a vector
integer instruction, a scalar floating point instruction and a
scalar integer instruction.
6. The method of claim 1, wherein the estimating estimates a heat
level for the processor.
7. A method for performing power management, the method comprising
the steps of: determining power information based on a rate of
execution of instructions by a first processor; and estimating a
rate of power consumption as a function of the determined power
information.
8. The method of claim 7, wherein the instructions are of different
types, and the power information is determined by counting the
number of each of the respective types of instructions being
executed by the first processor.
9. The method of claim 8, wherein the different types of
instructions include a floating point instruction and an integer
instruction.
10. The method of claim 8, wherein the different types of
instructions include a vector floating point instruction, a vector
integer instruction, a scalar floating point instruction and a
scalar integer instruction.
11. The method of claim 7, further comprising sending the power
information to a second processor, wherein the estimating is
performed by the second processor.
12. The method of claim 11, wherein the second processor controls
the first processor to reduce energy usage if the estimated energy
usage is above a predefined level.
13. The method of claim 12, wherein the second processor puts the
first processor into an idle state.
14. Apparatus performing power management, the apparatus
comprising: a first processor; and a monitoring circuit operable to
generate power information based on a rate of execution of
instructions by the first processor.
15. The apparatus of claim 14, wherein the rate of execution is
represented by a rate of fetching instructions for execution, the
instructions include instructions having different types and the
power information includes counts of each of the different types of
instructions being fetched for execution.
16. The apparatus of claim 15, wherein the different types of
instructions include a floating point instruction and an integer
instruction.
17. The apparatus of claim 15, wherein the different types of
instructions include a vector floating point instruction, a vector
integer instruction, a scalar floating point instruction and a
scalar integer instruction.
18. The apparatus of claim 15, wherein the monitoring circuit
includes counters for maintaining the counts of each of the
different types of instructions.
19. The apparatus of claim 14, wherein the first processor is
operable to send the power information to a second processor, and
the second processor is operable to estimate a rate of power
consumption by the first processor.
20. The apparatus of claim 19, wherein the second processor is
operable to estimate a heat level corresponding to the estimated
rate of power consumption.
21. A processing element for performing power management, the
processing element comprising: a first processing unit; a number of
attached processing units, at least one attached processing unit
having a monitoring circuit operable to accumulate power
information related to a rate at which instructions are executed
therein; wherein the at least one attached processing unit is
operable to send the accumulated power information to the first
processing unit, and the first processing unit is operable to
determine a rate of power consumption from the accumulated power
information.
22. The processing element of claim 21, wherein the first
processing unit is operable to reduce an energy usage of the at
least one attached processing unit if the determined power
consumption for that attached processing unit is above a predefined
value.
23. The processing element of claim 21, wherein the first
processing unit is operable to reduce an energy usage of that
attached processing unit by causing that attached processing unit
to enter an idle state.
24. The processing element of claim 21, wherein the instructions
include instructions having different types, and wherein the
accumulated power information includes data representing counts for
how many instructions of the different types of instructions have
been executed.
25. The processing element of claim 24, wherein the different types
of instructions include a floating point instruction and an integer
instruction.
26. The processing element of claim 24, wherein the different types
of instructions include a vector floating point instruction, a
vector integer instruction, a scalar floating point instruction and
a scalar integer instruction.
27. The processing element of claim 21, wherein the first
processing unit is operable to estimate a heat level corresponding
to the determined rate of power consumption.
28. A processing environment comprising: a first processing unit; a
number of additional processing units each having a monitoring
circuit operable to generate power information based on a rate at
which instructions are executed by the respective additional
processing unit; wherein the additional processing units are
operable to send power information to the first processing unit,
the first processing unit being operable to monitor a rate of power
consumption of the additional processing units based on the sent
power information.
29. The processing environment of claim 28, wherein the first
processing unit reduces the rate of power consumption of at least
one of the attached processing units when the rate of power
consumption is above a predefined value.
30. The processing environment of claim 28, wherein the first
processing unit reduces the power consumption of the at least one
attached processing unit by causing that attached processing unit
to enter an idle state.
31. The processing environment of claim 28, wherein the
instructions include instructions having different types and the
accumulated power information includes data representing counts of
each of the different types of instructions that are executed.
32. The processing environment of claim 31, wherein the different
types of instructions include a floating point instruction and an
integer instruction.
33. The processing environment of claim 31, wherein the different
types of instructions include a vector floating point instruction,
a vector integer instruction, a scalar floating point instruction
and a scalar integer instruction.
34. The processing environment of claim 28, wherein the first
processing unit further estimates a heat level based on the
monitored rate of power consumption.
Description
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This is a continuation-in-part of the following copending,
commonly assigned, U.S. patent applications: "Computer Architecture
and Software Cells for Broadband Networks," application Ser. No.
09/816,004, filed Mar. 22, 2001; "System and Method for Data
Synchronization for a Computer Architecture for Broadband
Networks," application Ser. No. 09/815,554, filed Mar. 22, 2001;
"Memory Protection System and Method for Computer Architecture for
Broadband Networks," application Ser. No. 09/816,020, filed Mar.
22, 2001; "Resource Dedication System and Method for a Computer
Architecture for Broadband Networks," application Ser. No.
09/815,558, filed Mar. 22, 2001; and "Processing Modules for
Computer Architecture for Broadband Networks," application Ser. No.
09/816,752, filed Mar. 22, 2001; all of which are incorporated by
reference herein.
BACKGROUND OF THE INVENTION
[0002] The present invention relates to power management and, in
particular, to power management in a processing environment.
[0003] In a processing environment, e.g., a single processor-based
personal computer, there is the need to perform some type of power
management. The latter can cover a range of methods and techniques.
For example, power management can be concerned with heat
dissipation, or heat management, with respect to the processor
itself. As such, the use of a heat sink mounted on the
processor--to keep the processor within a particular temperature
range--is a form of power management. Similarly, monitoring the
voltage level of a battery (battery conservation) in, e.g., a
laptop computer, is yet another example of power management in a
processing environment.
[0004] In terms of heat management, other more complex schemes
exist. For example, temperature sensors can be placed on critical
circuit elements, such as the processor, and fans can be mounted in
an associated system enclosure. When the temperature sensors
indicate a particular temperature has been reached, the fans turn
on, increasing the air flow through the system enclosure for
cooling down the processor. Alternatively, an alarm could be
generated which causes the processing environment to begin a
shutdown when the temperature sensors indicate that a predefined
temperature level has been exceeded--i.e., that the system is
overheating.
SUMMARY OF THE INVENTION
[0005] As processors become more complex--whether in terms of size
and/or speed--power management through the use of temperature
sensors may not provide a complete solution (indeed, in some
situations the use of temperature sensors may even be inelegant,
expensive and clumsy). As such, we have observed that the amount of
heat generated by a processor depends directly on the types of
instructions that the processor is executing, e.g., some
instructions exercise more of the processor than other instructions.
Therefore, and in accordance with the invention, a processing
environment performs power management by monitoring the number and
type of processor accesses and estimating an energy usage as a
function thereof.
[0006] In an embodiment of the invention, a processor monitors the
number and type of instruction fetches over a time period. The
instruction set of a processor is divided into at least two types
of instructions, each type associated with a different heat level.
A heat level is then calculated as a function of the number of each
type of instruction executed over the time interval.
[0007] In another embodiment, a processing element (PE) comprises a
processing unit (PU) and a number of attached processing units
(APUs). The instruction set of each APU is a priori divided into a
number of types, each type associated with a different amount of
heat generation. Each APU keeps track of the amount of each type of
instruction executed over a time period (the power information) and
provides this power information to the PU. The PU then performs
power management as a function of the provided power information
from each APU. For example, the PU may direct that a particular APU
enter an idle state to reduce power consumption.
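As a rough illustration of this scheme (not the patent's own implementation), the C sketch below shows how per-type instruction counts reported by an APU might be combined into a heat estimate and compared against a threshold by the PU. The instruction categories follow the specification, but the weights, the threshold handling and all identifier names are illustrative assumptions.

    #include <stdint.h>
    #include <stdbool.h>

    /* Instruction types tracked by each APU (per the specification:
     * vector/scalar x floating-point/integer). */
    enum instr_type {
        INSTR_VEC_FLOAT,
        INSTR_VEC_INT,
        INSTR_SCALAR_FLOAT,
        INSTR_SCALAR_INT,
        INSTR_TYPE_COUNT
    };

    /* "Power information" reported by one APU: counts of each
     * instruction type executed (or fetched) during the last
     * monitoring interval. */
    struct power_info {
        uint32_t count[INSTR_TYPE_COUNT];
    };

    /* Illustrative per-type heat weights (arbitrary units); the patent
     * only says each type is associated with a different amount of
     * heat generation. */
    static const double heat_weight[INSTR_TYPE_COUNT] = {
        4.0,  /* vector floating point */
        3.0,  /* vector integer        */
        2.0,  /* scalar floating point */
        1.0   /* scalar integer        */
    };

    /* PU-side estimate: a weighted sum of the counts gives a heat or
     * power level for the reporting APU over the interval. */
    static double estimate_heat(const struct power_info *pi)
    {
        double level = 0.0;
        for (int t = 0; t < INSTR_TYPE_COUNT; t++)
            level += heat_weight[t] * (double)pi->count[t];
        return level;
    }

    /* PU-side power management: if an APU's estimated level exceeds a
     * predefined threshold, direct that APU to enter an idle state. */
    static bool should_idle_apu(const struct power_info *pi, double threshold)
    {
        return estimate_heat(pi) > threshold;
    }

The linear weighting above is only one choice; any monotone function of the per-type counts would serve the same power-management purpose.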
DESCRIPTION OF THE DRAWINGS
[0008] FIG. 1 illustrates the overall architecture of a computer
network in accordance with the present invention.
[0009] FIG. 2 is a diagram illustrating the structure of a
processor element (PE) in accordance with the present
invention.
[0010] FIG. 3 is a diagram illustrating the structure of a
broadband engine (BE) in accordance with the present invention.
[0011] FIG. 4 is a diagram illustrating the structure of an
attached processing unit (APU) in accordance with the present
invention.
[0012] FIG. 5 is a diagram illustrating the structure of a
processor element, visualizer (VS) and an optical interface in
accordance with the present invention.
[0013] FIG. 6 is a diagram illustrating one combination of
processor elements in accordance with the present invention.
[0014] FIG. 7 illustrates another combination of processor elements
in accordance with the present invention.
[0015] FIG. 8 illustrates yet another combination of processor
elements in accordance with the present invention.
[0016] FIG. 9 illustrates yet another combination of processor
elements in accordance with the present invention.
[0017] FIG. 10 illustrates yet another combination of processor
elements in accordance with the present invention.
[0018] FIG. 11A illustrates the integration of optical interfaces
within a chip package in accordance with the present invention.
[0019] FIG. 11B is a diagram of one configuration of processors
using the optical interfaces of FIG. 11A.
[0020] FIG. 11C is a diagram of another configuration of processors
using the optical interfaces of FIG. 11A.
[0021] FIG. 12A illustrates the structure of a memory system in
accordance with the present invention.
[0022] FIG. 12B illustrates the writing of data from a first
broadband engine to a second broadband engine in accordance with
the present invention.
[0023] FIG. 13 is a diagram of the structure of a shared memory for
a processor element in accordance with the present invention.
[0024] FIG. 14A illustrates one structure for a bank of the memory
shown in FIG. 13.
[0025] FIG. 14B illustrates another structure for a bank of the
memory shown in FIG. 13.
[0026] FIG. 15 illustrates a structure for a direct memory access
controller in accordance with the present invention.
[0027] FIG. 16 illustrates an alternative structure for a direct
memory access controller in accordance with the present
invention.
[0028] FIGS. 17A-17O illustrate the operation of data
synchronization in accordance with the present invention.
[0029] FIG. 18 is a three-state memory diagram illustrating the
various states of a memory location in accordance with the data
synchronization scheme of the present invention.
[0030] FIG. 19 illustrates the structure of a key control table for
a hardware sandbox in accordance with the present invention.
[0031] FIG. 20 illustrates a scheme for storing memory access keys
for a hardware sandbox in accordance with the present
invention.
[0032] FIG. 21 illustrates the structure of a memory access control
table for a hardware sandbox in accordance with the present
invention.
[0033] FIG. 22 is a flow diagram of the steps for accessing a
memory sandbox using the key control table of FIG. 19 and the
memory access control table of FIG. 21.
[0034] FIG. 23 illustrates the structure of a software cell in
accordance with the present invention.
[0035] FIG. 24 is a flow diagram of the steps for issuing remote
procedure calls to APUs in accordance with the present
invention.
[0036] FIG. 25 illustrates the structure of a dedicated pipeline
for processing streaming data in accordance with the present
invention.
[0037] FIG. 26 is a flow diagram of the steps performed by the
dedicated pipeline of FIG. 25 in the processing of streaming data
in accordance with the present invention.
[0038] FIG. 27 illustrates an alternative structure for a dedicated
pipeline for the processing of streaming data in accordance with
the present invention.
[0039] FIG. 28 illustrates a scheme for an absolute timer for
coordinating the parallel processing of applications and data by
APUs in accordance with the present invention.
[0040] FIG. 29 shows an illustrative embodiment for performing
power management in accordance with the principles of the
invention.
[0041] FIG. 30 shows an illustrative flow diagram in accordance
with the principles of the invention.
[0042] FIG. 31 shows another illustrative embodiment for performing
power management in accordance with the principles of the
invention.
[0043] FIG. 32 shows an illustrative embodiment of an attached
processor unit in accordance with the principles of the
invention.
[0044] FIG. 33 shows an illustrative flow diagram for use in the
embodiment of FIG. 31.
[0045] FIG. 34 shows another illustrative embodiment of a
processing environment in accordance with the principles of the
invention.
[0046] FIG. 35 shows an illustrative flow diagram for use in the
embodiment of FIG. 34.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
[0047] The overall architecture for a computer system 101 in
accordance with the present invention is shown in FIG. 1.
[0048] As illustrated in this figure, system 101 includes network
104 to which is connected a plurality of computers and computing
devices. Network 104 can be a LAN, a global network, such as the
Internet, or any other computer network.
[0049] The computers and computing devices connected to network 104
(the network's "members") include, e.g., client computers 106,
server computers 108, personal digital assistants (PDAs) 110,
digital television (DTV) 112 and other wired or wireless computers
and computing devices. The processors employed by the members of
network 104 are constructed from the same common computing module.
These processors also preferably all have the same ISA and perform
processing in accordance with the same instruction set. The number
of modules included within any particular processor depends upon
the processing power required by that processor.
[0050] For example, since servers 108 of system 101 perform more
processing of data and applications than clients 106, servers 108
contain more computing modules than clients 106. PDAs 110, on the
other hand, perform the least amount of processing. PDAs 110,
therefore, contain the smallest number of computing modules. DTV
112 performs a level of processing between that of clients 106 and
servers 108. DTV 112, therefore, contains a number of computing
modules between that of clients 106 and servers 108. As discussed
below, each computing module contains a processing controller and a
plurality of identical processing units for performing parallel
processing of the data and applications transmitted over network
104.
[0051] This homogeneous configuration for system 101 facilitates
adaptability, processing speed and processing efficiency. Because
each member of system 101 performs processing using one or more (or
some fraction) of the same computing module, the particular
computer or computing device performing the actual processing of
data and applications is unimportant. The processing of a
particular application and data, moreover, can be shared among the
network's members. By uniquely identifying the cells comprising the
data and applications processed by system 101 throughout the
system, the processing results can be transmitted to the computer
or computing device requesting the processing regardless of where
this processing occurred. Because the modules performing this
processing have a common structure and employ a common ISA, the
computational burdens of an added layer of software to achieve
compatibility among the processors are avoided. This architecture
and programming model facilitates the processing speed necessary to
execute, e.g., real-time, multimedia applications.
[0052] To take further advantage of the processing speeds and
efficiencies facilitated by system 101, the data and applications
processed by this system are packaged into uniquely identified,
uniformly formatted software cells 102. Each software cell 102
contains, or can contain, both applications and data. Each software
cell also contains an ID to globally identify the cell throughout
network 104 and system 101. This uniformity of structure for the
software cells, and the software cells' unique identification
throughout the network, facilitates the processing of applications
and data on any computer or computing device of the network. For
example, a client 106 may formulate a software cell 102 but,
because of the limited processing capabilities of client 106,
transmit this software cell to a server 108 for processing.
Software cells can migrate, therefore, throughout network 104 for
processing on the basis of the availability of processing resources
on the network.
[0053] The homogeneous structure of processors and software cells
of system 101 also avoids many of the problems of today's
heterogeneous networks. For example, inefficient programming models
which seek to permit processing of applications on any ISA using
any instruction set, e.g., virtual machines such as the Java
virtual machine, are avoided. System 101, therefore, can implement
broadband processing far more effectively and efficiently than
today's networks.
[0054] The basic processing module for all members of network 104
is the processor element (PE). FIG. 2 illustrates the structure of
a PE. As shown in this figure, PE 201 comprises a processing unit
(PU) 203, a direct memory access controller (DMAC) 205 and a
plurality of attached processing units (APUs), namely, APU 207, APU
209, APU 211, APU 213, APU 215, APU 217, APU 219 and APU 221. A
local PE bus 223 transmits data and applications among the APUs,
DMAC 205 and PU 203. Local PE bus 223 can have, e.g., a
conventional architecture or be implemented as a packet switch
network. Implementation as a packet switch network, while requiring
more hardware, increases available bandwidth.
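For orientation only, here is a minimal data-structure sketch (in C, with invented type and field names) of the PE organization just described: one PU, one DMAC and eight APUs connected by a local PE bus.

    #include <stdint.h>

    #define NUM_APUS 8      /* PE 201 preferably includes eight APUs */

    struct apu;             /* attached processing unit              */
    struct pu;              /* processing unit                       */
    struct dmac;            /* direct memory access controller       */

    /* One processor element: PU, DMAC and eight APUs sharing a local
     * PE bus (a conventional bus or a packet-switched network; it is
     * modeled here only as an opaque handle). */
    struct processor_element {
        struct pu   *pu;
        struct dmac *dmac;
        struct apu  *apu[NUM_APUS];
        void        *local_pe_bus;
    };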
[0055] PE 201 can be constructed using various methods for
implementing digital logic. PE 201 preferably is constructed,
however, as a single integrated circuit employing a complementary
metal oxide semiconductor (CMOS) on a silicon substrate.
Alternative materials for substrates include gallium arsenide,
gallium aluminum arsenide and other so-called III-B compounds
employing a wide variety of dopants. PE 201 also could be
implemented using superconducting material, e.g., rapid
single-flux-quantum (RSFQ) logic.
[0056] PE 201 is closely associated with a dynamic random access
memory (DRAM) 225 through a high bandwidth memory connection 227.
DRAM 225 functions as the main memory for PE 201. Although DRAM
225 preferably is a dynamic random access memory, DRAM 225 could be
implemented using other means, e.g., as a static random access
memory (SRAM), a magnetic random access memory (MRAM), an optical
memory or a holographic memory. DMAC 205 facilitates the transfer
of data between DRAM 225 and the APUs and PU of PE 201. As further
discussed below, DMAC 205 designates for each APU an exclusive area
in DRAM 225 into which only the APU can write data and from which
only the APU can read data. This exclusive area is designated a
"sandbox."
[0057] PU 203 can be, e.g., a standard processor capable of
stand-alone processing of data and applications. In operation, PU
203 schedules and orchestrates the processing of data and
applications by the APUs. The APUs preferably are single
instruction, multiple data (SIMD) processors. Under the control of
PU 203, the APUs perform the processing of these data and
applications in a parallel and independent manner. DMAC 205
controls accesses by PU 203 and the APUs to the data and
applications stored in the shared DRAM 225. Although PE 201
preferably includes eight APUs, a greater or lesser number of APUs
can be employed in a PE depending upon the processing power
required. The PU 203 and some or all of the APUs may have the same
hardware structure and/or functionality. Individual processors may
be configured as controlling or controlled processors, if
necessary, by software. For instance, in FIG. 2, the PE 201 may
include nine processors having the same architecture. One of the
nine processors may be designated as a controlling processor (e.g.,
PU 203) and the remaining processors may be designated as
controlled processors (e.g., APUs 207, 209, 211, 213, 215, 217, 219
and 221). Also, a number of PEs, such as PE 201, may be joined or
packaged together to provide enhanced processing power.
[0058] For example, as shown in FIG. 3, four PEs may be packaged or
joined together, e.g., within one or more chip packages, to form a
single processor for a member of network 104. This configuration is
designated a broadband engine (BE). As shown in FIG. 3, BE 301
contains four PEs, namely, PE 303, PE 305, PE 307 and PE 309.
Communications among these PEs are over BE bus 311. Broad bandwidth
memory connection 313 provides communication between shared DRAM
315 and these PEs. In lieu of BE bus 311, communications among the
PEs of BE 301 can occur through DRAM 315 and this memory
connection.
[0059] Input/output (I/O) interface 317 and external bus 319
provide communications between broadband engine 301 and the other
members of network 104. Each PE of BE 301 performs processing of
data and applications in a parallel and independent manner
analogous to the parallel and independent processing of
applications and data performed by the APUs of a PE.
[0060] FIG. 4 illustrates the structure of an APU. APU 402 includes
local memory 406, registers 410, four floating point units 412 and
four integer units 414. Again, however, depending upon the
processing power required, a greater or lesser number of floating
point units 412 and integer units 414 can be employed. In a
preferred embodiment, local memory 406 contains 128 kilobytes of
storage, and the capacity of registers 410 is 128.times.128 bits.
Floating point units 412 preferably operate at a speed of 32
billion floating point operations per second (32 GFLOPS), and
integer units 414 preferably operate at a speed of 32 billion
operations per second (32 GOPS).
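The stated parameters of this preferred APU embodiment can be summarized as constants; this merely restates the figures above, with hypothetical identifier names.

    /* APU parameters as stated for the preferred embodiment. */
    enum {
        APU_LOCAL_MEMORY_BYTES = 128 * 1024, /* 128 KB local storage     */
        APU_NUM_REGISTERS      = 128,        /* register file: 128 regs  */
        APU_REGISTER_BITS      = 128,        /* ... each 128 bits wide   */
        APU_NUM_FP_UNITS       = 4,          /* four floating point units*/
        APU_NUM_INT_UNITS      = 4           /* four integer units       */
    };

    /* Stated peak rates: 32 GFLOPS from the floating point units and
     * 32 GOPS from the integer units. */
    #define APU_PEAK_GFLOPS 32
    #define APU_PEAK_GOPS   32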
[0061] Local memory 406 is not a cache memory. Local memory 406 is
preferably constructed as an SRAM. Cache coherency support for an
APU is unnecessary. A PU may require cache coherency support for
direct memory accesses initiated by the PU. Cache coherency support
is not required, however, for direct memory accesses initiated by
an APU or for accesses from and to external devices.
[0062] APU 402 further includes bus 404 for transmitting
applications and data to and from the APU. In a preferred
embodiment, this bus is 1,024 bits wide. APU 402 further includes
internal busses 408, 420 and 418. In a preferred embodiment, bus
408 has a width of 256 bits and provides communications between
local memory 406 and registers 410. Busses 420 and 418 provide
communications between, respectively, registers 410 and floating
point units 412, and registers 410 and integer units 414. In a
preferred embodiment, the width of busses 418 and 420 from
registers 410 to the floating point or integer units is 384 bits,
and the width of busses 418 and 420 from the floating point or
integer units to registers 410 is 128 bits. The larger width of
these busses from registers 410 to the floating point or integer
units than from these units to registers 410 accommodates the
larger data flow from registers 410 during processing. A maximum of
three words are needed for each calculation. The result of each
calculation, however, normally is only one word.
[0063] FIGS. 5-10 further illustrate the modular structure of the
processors of the members of network 104. For example, as shown in
FIG. 5, a processor may comprise a single PE 502. As discussed
above, this PE typically comprises a PU, DMAC and eight APUs. Each
APU includes local storage (LS). On the other hand, a processor may
comprise the structure of visualizer (VS) 505. As shown in FIG. 5,
VS 505 comprises PU 512, DMAC 514 and four APUs, namely, APU 516,
APU 518, APU 520 and APU 522. The space within the chip package
normally occupied by the other four APUs of a PE is occupied in
this case by pixel engine 508, image cache 510 and cathode ray tube
controller (CRTC) 504. Depending upon the speed of communications
required for PE 502 or VS 505, optical interface 506 also may be
included on the chip package.
[0064] Using this standardized, modular structure, numerous other
variations of processors can be constructed easily and efficiently.
For example, the processor shown in FIG. 6 comprises two chip
packages, namely, chip package 602 comprising a BE and chip package
604 comprising four VSs. Input/output (I/O) 606 provides an
interface between the BE of chip package 602 and network 104. Bus
608 provides communications between chip package 602 and chip
package 604. Input output processor (IOP) 610 controls the flow of
data into and out of I/O 606. I/O 606 may be fabricated as an
application specific integrated circuit (ASIC). The output from the
VSs is video signal 612.
[0065] FIG. 7 illustrates a chip package for a BE 702 with two
optical interfaces 704 and 706 for providing ultra high speed
communications to the other members of network 104 (or other chip
packages locally connected). BE 702 can function as, e.g., a server
on network 104.
[0066] The chip package of FIG. 8 comprises two PEs 802 and 804 and
two VSs 806 and 808. An I/O 810 provides an interface between the
chip package and network 104. The output from the chip package is a
video signal. This configuration may function as, e.g., a graphics
work station.
[0067] FIG. 9 illustrates yet another configuration. This
configuration contains one-half of the processing power of the
configuration illustrated in FIG. 8. Instead of two PEs, one PE 902
is provided, and instead of two VSs, one VS 904 is provided. I/O
906 has one-half the bandwidth of the I/O illustrated in FIG. 8.
Such a processor also may function, however, as a graphics work
station.
[0068] A final configuration is shown in FIG. 10. This processor
consists of only a single VS 1002 and an I/O 1004. This
configuration may function as, e.g., a PDA.
[0069] FIG. 11A illustrates the integration of optical interfaces
into a chip package of a processor of network 104. These optical
interfaces convert optical signals to electrical signals and
electrical signals to optical signals and can be constructed from a
variety of materials including, e.g., gallium arsenide, aluminum
gallium arsenide, germanium and other elements or compounds. As
shown in this figure, optical interfaces 1104 and 1106 are
fabricated on the chip package of BE 1102. BE bus 1108 provides
communication among the PEs of BE 1102, namely, PE 1110, PE 1112,
PE 1114, PE 1116, and these optical interfaces. Optical interface
1104 includes two ports, namely, port 1118 and port 1120, and
optical interface 1106 also includes two ports, namely, port 1122
and port 1124. Ports 1118, 1120, 1122 and 1124 are connected to,
respectively, optical wave guides 1126, 1128, 1130 and 1132.
Optical signals are transmitted to and from BE 1102 through these
optical wave guides via the ports of optical interfaces 1104 and
1106.
[0070] A plurality of BEs can be connected together in various
configurations using such optical wave guides and the four optical
ports of each BE. For example, as shown in FIG. 11B, two or more
BEs, e.g., BE 1152, BE 1154 and BE 1156, can be connected serially
through such optical ports. In this example, optical interface 1166
of BE 1152 is connected through its optical ports to the optical
ports of optical interface 1160 of BE 1154. In a similar manner,
the optical ports of optical interface 1162 on BE 1154 are
connected to the optical ports of optical interface 1164 of BE
1156.
[0071] A matrix configuration is illustrated in FIG. 11C. In this
configuration, the optical interface of each BE is connected to two
other BEs. As shown in this figure, one of the optical ports of
optical interface 1188 of BE 1172 is connected to an optical port
of optical interface 1182 of BE 1176. The other optical port of
optical interface 1188 is connected to an optical port of optical
interface 1184 of BE 1178. In a similar manner, one optical port of
optical interface 1190 of BE 1174 is connected to the other optical
port of optical interface 1184 of BE 1178. The other optical port
of optical interface 1190 is connected to an optical port of
optical interface 1186 of BE 1180. This matrix configuration can be
extended in a similar manner to other BEs.
[0072] Using either a serial configuration or a matrix
configuration, a processor for network 104 can be constructed of
any desired size and power. Of course, additional ports can be
added to the optical interfaces of the BEs, or to processors having
a greater or lesser number of PEs than a BE, to form other
configurations.
[0073] FIG. 12A illustrates the control system and structure for
the DRAM of a BE. A similar control system and structure is
employed in processors having other sizes and containing more or
fewer PEs. As shown in this figure, a cross-bar switch connects each
DMAC 1210 of the four PEs comprising BE 1201 to eight bank controls
1206. Each bank control 1206 controls eight banks 1208 (only four
are shown in the figure) of DRAM 1204. DRAM 1204, therefore,
comprises a total of sixty-four banks. In a preferred embodiment,
DRAM 1204 has a capacity of 64 megabytes, and each bank has a
capacity of 1 megabyte. The smallest addressable unit within each
bank, in this preferred embodiment, is a block of 1024 bits.
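A small C sketch of this DRAM geometry follows. The specification fixes the totals (64 banks of 1 megabyte, 1024-bit blocks) but not how addresses map onto banks, so the block-interleaved mapping shown here is purely an illustrative assumption.

    #include <stdint.h>

    /* DRAM geometry from the preferred embodiment: 64 MB total,
     * 64 banks of 1 MB each, smallest addressable unit = one
     * 1024-bit (128-byte) block. */
    enum {
        DRAM_NUM_BANKS   = 64,
        DRAM_BANK_BYTES  = 1 << 20,        /* 1 megabyte per bank   */
        DRAM_BLOCK_BYTES = 1024 / 8,       /* 1024-bit block = 128 B*/
        BLOCKS_PER_BANK  = DRAM_BANK_BYTES / DRAM_BLOCK_BYTES /* 8192 */
    };

    struct dram_location {
        unsigned bank;      /* 0..63   */
        unsigned block;     /* 0..8191 */
    };

    /* Hypothetical mapping: consecutive block numbers rotate across
     * the 64 banks (the patent does not specify a mapping). */
    static struct dram_location map_block(uint32_t block_number)
    {
        struct dram_location loc;
        loc.bank  = block_number % DRAM_NUM_BANKS;
        loc.block = block_number / DRAM_NUM_BANKS;
        return loc;
    }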
[0074] BE 1201 also includes switch unit 1212. Switch unit 1212
enables other APUs on BEs closely coupled to BE 1201 to access DRAM
1204. A second BE, therefore, can be closely coupled to a first BE,
and each APU of each BE can address twice the number of memory
locations normally accessible to an APU. The direct reading or
writing of data from or to the DRAM of a first BE from or to the
DRAM of a second BE can occur through a switch unit such as switch
unit 1212.
[0075] For example, as shown in FIG. 12B, to accomplish such
writing, the APU of a first BE, e.g., APU 1220 of BE 1222, issues a
write command to a memory location of a DRAM of a second BE, e.g.,
DRAM 1228 of BE 1226 (rather than, as in the usual case, to DRAM
1224 of BE 1222). DMAC 1230 of BE 1222 sends the write command
through cross-bar switch 1221 to bank control 1234, and bank
control 1234 transmits the command to an external port 1232
connected to bank control 1234. DMAC 1238 of BE 1226 receives the
write command and transfers this command to switch unit 1240 of BE
1226. Switch unit 1240 identifies the DRAM address contained in the
write command and sends the data for storage in this address
through bank control 1242 of BE 1226 to bank 1244 of DRAM 1228.
Switch unit 1240, therefore, enables both DRAM 1224 and DRAM 1228
to function as a single memory space for the APUs of BE 1222.
[0076] FIG. 13 shows the configuration of the sixty-four banks of a
DRAM. These banks are arranged into eight rows, namely, rows 1302,
1304, 1306, 1308, 1310, 1312, 1314 and 1316 and eight columns,
namely, columns 1320, 1322, 1324, 1326, 1328, 1330, 1332 and 1334.
Each row is controlled by a bank controller. Each bank controller,
therefore, controls eight megabytes of memory.
[0077] FIGS. 14A and 14B illustrate different configurations for
storing and accessing the smallest addressable memory unit of a
DRAM, e.g., a block of 1024 bits. In FIG. 14A, DMAC 1402 stores in
a single bank 1404 eight 1024 bit blocks 1406. In FIG. 14B, on the
other hand, while DMAC 1412 reads and writes blocks of data
containing 1024 bits, these blocks are interleaved between two
banks, namely, bank 1414 and bank 1416. Each of these banks,
therefore, contains sixteen blocks of data, and each block of data
contains 512 bits. This interleaving can facilitate faster
accessing of the DRAM and is useful in the processing of certain
applications.
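A minimal sketch of the FIG. 14B style of interleaving, assuming each 1024-bit block is simply split into two 512-bit halves stored at the same slot in two banks; the layout details are assumptions, not taken from the patent.

    #include <stdint.h>
    #include <string.h>

    #define BLOCK_BYTES 128               /* 1024 bits */
    #define HALF_BYTES  (BLOCK_BYTES / 2) /* 512 bits  */

    /* First 512 bits go to bank A, second 512 bits to bank B. */
    static void write_interleaved(uint8_t bank_a[], uint8_t bank_b[],
                                  unsigned slot,
                                  const uint8_t block[BLOCK_BYTES])
    {
        memcpy(&bank_a[slot * HALF_BYTES], block,              HALF_BYTES);
        memcpy(&bank_b[slot * HALF_BYTES], block + HALF_BYTES, HALF_BYTES);
    }

    /* Reassemble a 1024-bit block from its two 512-bit halves. */
    static void read_interleaved(const uint8_t bank_a[],
                                 const uint8_t bank_b[],
                                 unsigned slot, uint8_t block[BLOCK_BYTES])
    {
        memcpy(block,              &bank_a[slot * HALF_BYTES], HALF_BYTES);
        memcpy(block + HALF_BYTES, &bank_b[slot * HALF_BYTES], HALF_BYTES);
    }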
[0078] FIG. 15 illustrates the architecture for a DMAC 1506 within
a PE. As illustrated in this figure, the structural hardware
comprising DMAC 1506 is distributed throughout the PE such that
each APU 1502 has direct access to a structural node 1504 of DMAC
1506. Each node executes the logic appropriate for memory accesses
by the APU to which the node has direct access.
[0079] FIG. 16 shows an alternative embodiment of the DMAC, namely,
a non-distributed architecture. In this case, the structural
hardware of DMAC 1606 is centralized. APUs 1602 and PU 1604
communicate with DMAC 1606 via local PE bus 1607. DMAC 1606 is
connected through a cross-bar switch to a bus 1608. Bus 1608 is
connected to DRAM 1610.
[0080] As discussed above, all of the multiple APUs of a PE can
independently access data in the shared DRAM. As a result, a first
APU could be operating upon particular data in its local storage at
a time during which a second APU requests these data. If the data
were provided to the second APU at that time from the shared DRAM,
the data could be invalid because of the first APU's ongoing
processing which could change the data's value. If the second
processor received the data from the shared DRAM at that time,
therefore, the second processor could generate an erroneous result.
For example, the data could be a specific value for a global
variable. If the first processor changed that value during its
processing, the second processor would receive an outdated value. A
scheme is necessary, therefore, to synchronize the APUs' reading
and writing of data from and to memory locations within the shared
DRAM. This scheme must prevent the reading of data from a memory
location upon which another APU currently is operating in its local
storage (and which, therefore, are not current), and the writing of
data into a memory location storing current data.
[0081] To overcome these problems, for each addressable memory
location of the DRAM, an additional segment of memory is allocated
in the DRAM for storing status information relating to the data
stored in the memory location. This status information includes a
full/empty (F/E) bit, the identification of an APU (APU ID)
requesting data from the memory location and the address of the
APU's local storage (LS address) to which the requested data should
be read. An addressable memory location of the DRAM can be of any
size. In a preferred embodiment, this size is 1024 bits.
[0082] The setting of the F/E bit to 1 indicates that the data
stored in the associated memory location are current. The setting
of the F/E bit to 0, on the other hand, indicates that the data
stored in the associated memory location are not current. If an APU
requests the data when this bit is set to 0, the APU is prevented
from immediately reading the data. In this case, an APU ID
identifying the APU requesting the data, and an LS address
identifying the memory location within the local storage of this
APU to which the data are to be read when the data become current,
are entered into the additional memory segment.
[0083] An additional memory segment also is allocated for each
memory location within the local storage of the APUs. This
additional memory segment stores one bit, designated the "busy
bit." The busy bit is used to reserve the associated LS memory
location for the storage of specific data to be retrieved from the
DRAM. If the busy bit is set to 1 for a particular memory location
in local storage, the APU can use this memory location only for the
writing of these specific data. On the other hand, if the busy bit
is set to 0 for a particular memory location in local storage, the
APU can use this memory location for the writing of any data.
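The two kinds of additional memory segments just described might be modeled as follows; the field widths (for example the size of the APU ID and LS address) are assumptions, since the specification does not fix them.

    #include <stdint.h>
    #include <stdbool.h>

    /* Additional memory segment kept for each addressable DRAM
     * location (preferably 1024 bits): synchronization status. */
    struct dram_status_segment {
        bool     fe_bit;     /* full/empty: 1 = current, 0 = not current */
        uint8_t  apu_id;     /* APU waiting on this location (valid when
                                F/E = 0 and a read is pending)           */
        uint32_t ls_address; /* local-storage address to receive the data*/
    };

    /* Additional one-bit segment kept for each local-storage (LS)
     * location of an APU: the "busy bit" reserving the location for
     * specific data to be retrieved from the DRAM. */
    struct ls_status_segment {
        bool busy_bit;
    };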
[0084] Examples of the manner in which the F/E bit, the APU ID, the
LS address and the busy bit are used to synchronize the reading and
writing of data from and to the shared DRAM of a PE are illustrated
in FIGS. 17A-17O.
[0085] As shown in FIG. 17A, one or more PEs, e.g., PE 1720,
interact with DRAM 1702. PE 1720 includes APU 1722 and APU 1740.
APU 1722 includes control logic 1724, and APU 1740 includes control
logic 1742. APU 1722 also includes local storage 1726. This local
storage includes a plurality of addressable memory locations 1728.
APU 1740 includes local storage 1744, and this local storage also
includes a plurality of addressable memory locations 1746. All of
these addressable memory locations preferably are 1024 bits in
size.
[0086] An additional segment of memory is associated with each LS
addressable memory location. For example, memory segments 1729 and
1734 are associated with, respectively, local memory locations 1731
and 1732, and memory segment 1752 is associated with local memory
location 1750. A "busy bit," as discussed above, is stored in each
of these additional memory segments. Local memory location 1732 is
shown with several Xs to indicate that this location contains
data.
[0087] DRAM 1702 contains a plurality of addressable memory
locations 1704, including memory locations 1706 and 1708. These
memory locations preferably also are 1024 bits in size. An
additional segment of memory also is associated with each of these
memory locations. For example, additional memory segment 1760 is
associated with memory location 1706, and additional memory segment
1762 is associated with memory location 1708. Status information
relating to the data stored in each memory location is stored in
the memory segment associated with the memory location. This status
information includes, as discussed above, the F/E bit, the APU ID
and the LS address. For example, for memory location 1708, this
status information includes F/E bit 1712, APU ID 1714 and LS
address 1716.
[0088] Using the status information and the busy bit, the
synchronized reading and writing of data from and to the shared
DRAM among the APUs of a PE, or a group of PEs, can be
achieved.
[0089] FIG. 17B illustrates the initiation of the synchronized
writing of data from LS memory location 1732 of APU 1722 to memory
location 1708 of DRAM 1702. Control 1724 of APU 1722 initiates the
synchronized writing of these data. Since memory location 1708 is
empty, F/E bit 1712 is set to 0. As a result, the data in LS
location 1732 can be written into memory location 1708. If this bit
were set to 1 to indicate that memory location 1708 is full and
contains current, valid data, on the other hand, control 1724 would
receive an error message and be prohibited from writing data into
this memory location.
[0090] The result of the successful synchronized writing of the
data into memory location 1708 is shown in FIG. 17C. The written
data are stored in memory location 1708, and F/E bit 1712 is set to
1. This setting indicates that memory location 1708 is full and
that the data in this memory location are current and valid.
[0091] FIG. 17D illustrates the initiation of the synchronized
reading of data from memory location 1708 of DRAM 1702 to LS memory
location 1750 of local storage 1744. To initiate this reading, the
busy bit in memory segment 1752 of LS memory location 1750 is set
to 1 to reserve this memory location for these data. The setting of
this busy bit to 1 prevents APU 1740 from storing other data in
this memory location.
[0092] As shown in FIG. 17E, control logic 1742 next issues a
synchronize read command for memory location 1708 of DRAM 1702.
Since F/E bit 1712 associated with this memory location is set to
1, the data stored in memory location 1708 are considered current
and valid. As a result, in preparation for transferring the data
from memory location 1708 to LS memory location 1750, F/E bit 1712
is set to 0. This setting is shown in FIG. 17F. The setting of this
bit to 0 indicates that, following the reading of these data, the
data in memory location 1708 will be invalid.
[0093] As shown in FIG. 17G, the data within memory location 1708
next are read from memory location 1708 to LS memory location 1750.
FIG. 17H shows the final state. A copy of the data in memory
location 1708 is stored in LS memory location 1750. F/E bit 1712 is
set to 0 to indicate that the data in memory location 1708 are
invalid. This invalidity is the result of alterations to these data
to be made by APU 1740. The busy bit in memory segment 1752 also is
set to 0. This setting indicates that LS memory location 1750 now
is available to APU 1740 for any purpose, i.e., this LS memory
location no longer is in a reserved state waiting for the receipt
of specific data. LS memory location 1750, therefore, now can be
accessed by APU 1740 for any purpose.
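A condensed sketch of the successful cases walked through in FIGS. 17B-17H: a synchronized write into an empty DRAM location, and a synchronized read from a full location into an LS location that has been reserved via its busy bit. Error handling is reduced to a boolean return standing in for the error message; all names and sizes are illustrative.

    #include <stdint.h>
    #include <stdbool.h>
    #include <string.h>

    #define LOC_BYTES 128    /* one 1024-bit memory location */

    struct dram_loc {
        uint8_t data[LOC_BYTES];
        bool    fe_bit;      /* 1 = full/current, 0 = empty/not current */
    };

    /* Synchronized write: permitted only when the location is empty
     * (F/E = 0); on success the data are stored and the location
     * becomes full (F/E = 1). Returns false if the location already
     * holds current, valid data. */
    static bool sync_write(struct dram_loc *loc,
                           const uint8_t src[LOC_BYTES])
    {
        if (loc->fe_bit)
            return false;            /* full: writing prohibited */
        memcpy(loc->data, src, LOC_BYTES);
        loc->fe_bit = true;
        return true;
    }

    /* Synchronized read into a reserved LS location: permitted only
     * when the DRAM location is full. The data are copied out, the
     * F/E bit is cleared (the DRAM copy is no longer current), and
     * the busy bit is cleared so the LS location is again available
     * for any purpose. */
    static bool sync_read(struct dram_loc *loc, uint8_t dst[LOC_BYTES],
                          bool *ls_busy_bit)
    {
        if (!loc->fe_bit)
            return false;            /* empty: read is blocked */
        memcpy(dst, loc->data, LOC_BYTES);
        loc->fe_bit  = false;
        *ls_busy_bit = false;
        return true;
    }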
[0094] FIGS. 17I-17O illustrate the synchronized reading of data
from a memory location of DRAM 1702, e.g., memory location 1708, to
an LS memory location of an APU's local storage, e.g., LS memory
location 1750 of local storage 1744, when the F/E bit for the
memory location of DRAM 1702 is set to 0 to indicate that the data
in this memory location are not current or valid. As shown in FIG.
17I, to initiate this transfer, the busy bit in memory segment 1752
of LS memory location 1750 is set to 1 to reserve this LS memory
location for this transfer of data. As shown in FIG. 17J, control
logic 1742 next issues a synchronize read command for memory
location 1708 of DRAM 1702. Since the F/E bit associated with this
memory location, F/E bit 1712, is set to 0, the data stored in
memory location 1708 are invalid. As a result, a signal is
transmitted to control logic 1742 to block the immediate reading of
data from this memory location.
[0095] As shown in FIG. 17K, the APU ID 1714 and LS address 1716
for this read command next are written into memory segment 1762. In
this case, the APU ID for APU 1740 and the LS memory location for
LS memory location 1750 are written into memory segment 1762. When
the data within memory location 1708 become current, therefore,
this APU ID and LS memory location are used for determining the
location to which the current data are to be transmitted.
[0096] The data in memory location 1708 become valid and current
when an APU writes data into this memory location. The synchronized
writing of data into memory location 1708 from, e.g., memory
location 1732 of APU 1722, is illustrated in FIG. 17L. This
synchronized writing of these data is permitted because F/E bit
1712 for this memory location is set to 0.
[0097] As shown in FIG. 17M, following this writing, the data in
memory location 1708 become current and valid. APU ID 1714 and LS
address 1716 from memory segment 1762, therefore, immediately are
read from memory segment 1762, and this information then is deleted
from this segment. F/E bit 1712 also is set to 0 in anticipation of
the immediate reading of the data in memory location 1708. As shown
in FIG. 17N, upon reading APU ID 1714 and LS address 1716, this
information immediately is used for reading the valid data in
memory location 1708 to LS memory location 1750 of APU 1740. The
final state is shown in FIG. 17O. This figure shows the valid data
from memory location 1708 copied to memory location 1750, the busy
bit in memory segment 1752 set to 0 and F/E bit 1712 in memory
segment 1762 set to 0. The setting of this busy bit to 0 enables LS
memory location 1750 now to be accessed by APU 1740 for any
purpose. The setting of this F/E bit to 0 indicates that the data
in memory location 1708 no longer are current and valid.
[0098] FIG. 18 summarizes the operations described above and the
various states of a memory location of the DRAM based upon the
states of the F/E bit, the APU ID and the LS address stored in the
memory segment corresponding to the memory location. The memory
location can have three states. These three states are an empty
state 1880 in which the F/E bit is set to 0 and no information is
provided for the APU ID or the LS address, a full state 1882 in
which the F/E bit is set to 1 and no information is provided for
the APU ID or LS address and a blocking state 1884 in which the F/E
bit is set to 0 and information is provided for the APU ID and LS
address.
[0099] As shown in this figure, in empty state 1880, a synchronized
writing operation is permitted and results in a transition to full
state 1882. A synchronized reading operation, however, results in a
transition to the blocking state 1884 because the data in the
memory location, when the memory location is in the empty state,
are not current.
[0100] In full state 1882, a synchronized reading operation is
permitted and results in a transition to empty state 1880. On the
other hand, a synchronized writing operation in full state 1882 is
prohibited to prevent overwriting of valid data. If such a writing
operation is attempted in this state, no state change occurs and an
error message is transmitted to the APU's corresponding control
logic.
[0101] In blocking state 1884, the synchronized writing of data
into the memory location is permitted and results in a transition
to empty state 1880. On the other hand, a synchronized reading
operation in blocking state 1884 is prohibited to prevent a
conflict with the earlier synchronized reading operation which
resulted in this state. If a synchronized reading operation is
attempted in blocking state 1884, no state change occurs and an
error message is transmitted to the APU's corresponding control
logic.
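The state table of FIG. 18 can be captured in a small transition function; this sketch uses invented names and treats the prohibited operations as a boolean failure standing in for the error message sent to the APU's control logic.

    #include <stdbool.h>

    /* The three states of a DRAM memory location from FIG. 18. */
    enum loc_state {
        STATE_EMPTY,     /* F/E = 0, no APU ID / LS address recorded */
        STATE_FULL,      /* F/E = 1                                  */
        STATE_BLOCKING   /* F/E = 0, APU ID and LS address recorded  */
    };

    enum op { OP_SYNC_WRITE, OP_SYNC_READ };

    /* Returns true if the operation is permitted (and updates *state);
     * returns false for the prohibited cases. */
    static bool apply_sync_op(enum loc_state *state, enum op operation)
    {
        switch (*state) {
        case STATE_EMPTY:
            if (operation == OP_SYNC_WRITE) {
                *state = STATE_FULL;
                return true;
            }
            /* Reading an empty location defers the read: the location
             * enters the blocking state until data arrive. */
            *state = STATE_BLOCKING;
            return true;
        case STATE_FULL:
            if (operation == OP_SYNC_READ) {
                *state = STATE_EMPTY;
                return true;
            }
            return false;  /* write into a full location: prohibited */
        case STATE_BLOCKING:
            if (operation == OP_SYNC_WRITE) {
                *state = STATE_EMPTY; /* data forwarded to waiting APU */
                return true;
            }
            return false;  /* second read while blocking: prohibited */
        }
        return false;
    }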
[0102] The scheme described above for the synchronized reading and
writing of data from and to the shared DRAM also can be used for
eliminating the computational resources normally dedicated by a
processor for reading data from, and writing data to, external
devices. This input/output (I/O) function could be performed by a
PU. However, using a modification of this synchronization scheme,
an APU running an appropriate program can perform this function.
For example, using this scheme, a PU receiving an interrupt request
for the transmission of data from an I/O interface initiated by an
external device can delegate the handling of this request to this
APU. The APU then issues a synchronize write command to the I/O
interface. This interface in turn signals the external device that
data now can be written into the DRAM. The APU next issues a
synchronize read command to the DRAM to set the DRAM's relevant
memory space into a blocking state. The APU also sets to 1 the busy
bits for the memory locations of the APU's local storage needed to
receive the data. In the blocking state, the additional memory
segments associated with the DRAM's relevant memory space contain
the APU's ID and the address of the relevant memory locations of
the APU's local storage. The external device next issues a
synchronize write command to write the data directly to the DRAM's
relevant memory space. Since this memory space is in the blocking
state, the data are immediately read out of this space into the
memory locations of the APU's local storage identified in the
additional memory segments. The busy bits for these memory
locations then are set to 0. When the external device completes
writing of the data, the APU issues a signal to the PU that the
transmission is complete.
[0103] Using this scheme, therefore, data transfers from external
devices can be processed with minimal computational load on the PU.
The APU delegated this function, however, should be able to issue
an interrupt request to the PU, and the external device should have
direct access to the DRAM.
[0104] The DRAM of each PE includes a plurality of "sandboxes." A
sandbox defines an area of the shared DRAM beyond which a
particular APU, or set of APUs, cannot read or write data. These
sandboxes provide security against the corruption of data being
processed by one APU by data being processed by another APU. These
sandboxes also permit the downloading of software cells from
network 104 into a particular sandbox without the possibility of
the software cell corrupting data throughout the DRAM. In the
present invention, the sandboxes are implemented in the hardware of
the DRAMs and DMACs. By implementing these sandboxes in this
hardware rather than in software, advantages in speed and security
are obtained.
[0105] The PU of a PE controls the sandboxes assigned to the APUs.
Since the PU normally operates only trusted programs, such as an
operating system, this scheme does not jeopardize security. In
accordance with this scheme, the PU builds and maintains a key
control table. This key control table is illustrated in FIG. 19. As
shown in this figure, each entry in key control table 1902 contains
an identification (ID) 1904 for an APU, an APU key 1906 for that
APU and a key mask 1908. The use of this key mask is explained
below. Key control table 1902 preferably is stored in a relatively
fast memory, such as a static random access memory (SRAM), and is
associated with the DMAC. The entries in key control table 1902 are
controlled by the PU. When an APU requests the writing of data to,
or the reading of data from, a particular storage location of the
DRAM, the DMAC evaluates the APU key 1906 assigned to that APU in
key control table 1902 against a memory access key associated with
that storage location.
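A sketch of the key control table of FIG. 19 as the DMAC might hold it; the key widths and the linear lookup are illustrative assumptions.

    #include <stdint.h>
    #include <stddef.h>

    /* One entry of key control table 1902, maintained by the PU and
     * held in fast memory (e.g., SRAM) associated with the DMAC. */
    struct key_control_entry {
        uint8_t  apu_id;    /* identification (ID) 1904 */
        uint32_t apu_key;   /* APU key 1906             */
        uint32_t key_mask;  /* key mask 1908            */
    };

    /* Look up the key entry for a requesting APU by its ID.
     * Returns NULL if the APU has no entry. */
    static const struct key_control_entry *
    lookup_apu_key(const struct key_control_entry table[], size_t n,
                   uint8_t apu_id)
    {
        for (size_t i = 0; i < n; i++)
            if (table[i].apu_id == apu_id)
                return &table[i];
        return NULL;
    }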
[0106] As shown in FIG. 20, a dedicated memory segment 2010 is
assigned to each addressable storage location 2006 of a DRAM 2002.
A memory access key 2012 for the storage location is stored in this
dedicated memory segment. As discussed above, a further additional
dedicated memory segment 2008, also associated with each
addressable storage location 2006, stores synchronization
information for writing data to, and reading data from, the storage
location.
[0107] In operation, an APU issues a DMA command to the DMAC. This
command includes the address of a storage location 2006 of DRAM
2002. Before executing this command, the DMAC looks up the
requesting APU's key 1906 in key control table 1902 using the APU's
ID 1904. The DMAC then compares the APU key 1906 of the requesting
APU to the memory access key 2012 stored in the dedicated memory
segment 2010 associated with the storage location of the DRAM to
which the APU seeks access. If the two keys do not match, the DMA
command is not executed. On the other hand, if the two keys match,
the DMA command proceeds and the requested memory access is
executed.
[0108] An alternative embodiment is illustrated in FIG. 21. In this
embodiment, the PU also maintains a memory access control table
2102. Memory access control table 2102 contains an entry for each
sandbox within the DRAM. In the particular example of FIG. 21, the
DRAM contains 64 sandboxes. Each entry in memory access control
table 2102 contains an identification (ID) 2104 for a sandbox, a
base memory address 2106, a sandbox size 2108, a memory access key
2110 and an access key mask 2112. Base memory address 2106 provides
the address in the DRAM at which a particular memory sandbox starts.
Sandbox size 2108 provides the size of the sandbox and, therefore,
the endpoint of the particular sandbox.
[0109] FIG. 22 is a flow diagram of the steps for executing a DMA
command using key control table 1902 and memory access control
table 2102. In step 2202, an APU issues a DMA command to the DMAC
for access to a particular memory location or locations within a
sandbox. This command includes a sandbox ID 2104 identifying the
particular sandbox for which access is requested. In step 2204, the
DMAC looks up the requesting APU's key 1906 in key control table
1902 using the APU's ID 1904. In step 2206, the DMAC uses the
sandbox ID 2104 in the command to look up in memory access control
table 2102 the memory access key 2110 associated with that sandbox.
In step 2208, the DMAC compares the APU key 1906 assigned to the
requesting APU to the access key 2110 associated with the sandbox.
In step 2210, a determination is made of whether the two keys
match. If the two keys do not match, the process moves to step 2212
where the DMA command does not proceed and an error message is sent
to either the requesting APU, the PU or both. On the other hand, if
at step 2210 the two keys are found to match, the process proceeds
to step 2214 where the DMAC executes the DMA command.
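
The flow of FIG. 22 may be sketched in C as shown below. The table
layouts and linear lookups are illustrative assumptions; key masks
are left out here and are illustrated separately after the
discussion of paragraph [0110].

    #include <stddef.h>

    struct key_control_entry { unsigned int apu_id, apu_key, key_mask; };

    /* One entry of memory access control table 2102 (FIG. 21). */
    struct sandbox_entry {
        unsigned int  sandbox_id;   /* ID 2104                  */
        unsigned long base_addr;    /* base memory address 2106 */
        unsigned long size;         /* sandbox size 2108        */
        unsigned int  access_key;   /* memory access key 2110   */
        unsigned int  key_mask;     /* access key mask 2112     */
    };

    /* Returns 0 if the DMA command may proceed (step 2214) and -1 if the
       keys do not match and an error should be reported (step 2212).      */
    int dmac_check_command(const struct key_control_entry *ktab, size_t n_apu,
                           const struct sandbox_entry *mact, size_t n_sb,
                           unsigned int apu_id, unsigned int sandbox_id)
    {
        const struct key_control_entry *apu = NULL;
        const struct sandbox_entry *sb = NULL;
        size_t i;

        for (i = 0; i < n_apu; i++)          /* step 2204: look up APU key 1906  */
            if (ktab[i].apu_id == apu_id) apu = &ktab[i];
        for (i = 0; i < n_sb; i++)           /* step 2206: look up access key 2110 */
            if (mact[i].sandbox_id == sandbox_id) sb = &mact[i];

        if (apu == NULL || sb == NULL)
            return -1;
        return (apu->apu_key == sb->access_key) ? 0 : -1;  /* steps 2208-2210 */
    }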
[0110] The key masks for the APU keys and the memory access keys
provide greater flexibility to this system. A key mask for a key
converts a masked bit into a wildcard. For example, if the key mask
1908 associated with an APU key 1906 has its last two bits set to
"mask," designated by, e.g., setting these bits in key mask 1908 to
1, the corresponding bits of the APU key can be either a 1 or a 0
and still match the memory
access key. For example, the APU key might be 1010. This APU key
normally allows access only to a sandbox having an access key of
1010. If the APU key mask for this APU key is set to 0001, however,
then this APU key can be used to gain access to sandboxes having an
access key of either 1010 or 1011. Similarly, a sandbox having an
access key of 1010 with a mask set to 0001 can be accessed by an APU
having an APU key
of either 1010 or 1011. Since both the APU key mask and the memory
key mask can be used simultaneously, numerous variations of
accessibility by the APUs to the sandboxes can be established.
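
The wildcard behavior of the key masks can be expressed compactly in
C. In this sketch a mask bit set to 1 excludes the corresponding key
bit from the comparison; the bit convention and key width are
assumptions made for illustration.

    /* Compare an APU key against a memory access key, honoring both masks:
       any bit set in either mask is treated as a wildcard.                 */
    int keys_match_masked(unsigned int apu_key, unsigned int apu_mask,
                          unsigned int access_key, unsigned int access_mask)
    {
        unsigned int wildcard = apu_mask | access_mask;
        return ((apu_key ^ access_key) & ~wildcard) == 0;
    }

    /* Examples from the text (keys written in binary):
       APU key 1010 with APU key mask 0001 matches access keys 1010 and 1011:
           keys_match_masked(0xA, 0x1, 0xA, 0x0) == 1
           keys_match_masked(0xA, 0x1, 0xB, 0x0) == 1
       but not, for example, access key 1100:
           keys_match_masked(0xA, 0x1, 0xC, 0x0) == 0                       */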
[0111] The present invention also provides a new programming model
for the processors of system 101. This programming model employs
software cells 102. These cells can be transmitted to any processor
on network 104 for processing. This new programming model also
utilizes the unique modular architecture of system 101 and the
processors of system 101.
[0112] Software cells are processed directly by the APUs from the
APU's local storage. The APUs do not directly operate on any data
or programs in the DRAM. Data and programs in the DRAM are read
into the APU's local storage before the APU processes these data
and programs. The APU's local storage, therefore, includes a
program counter, stack and other software elements for executing
these programs. The PU controls the APUs by issuing direct memory
access (DMA) commands to the DMAC.
[0113] The structure of software cells 102 is illustrated in FIG.
23. As shown in this figure, a software cell, e.g., software cell
2302, contains routing information section 2304 and body 2306. The
information contained in routing information section 2304 is
dependent upon the protocol of network 104. Routing information
section 2304 contains header 2308, destination ID 2310, source ID
2312 and reply ID 2314. The destination ID includes a network
address. Under the TCP/IP protocol, e.g., the network address is an
Internet protocol (IP) address. Destination ID 2310 further
includes the identity of the PE and APU to which the cell should be
transmitted for processing. Source ID 2312 contains a network
address and identifies the PE and APU from which the cell
originated to enable the destination PE and APU to obtain
additional information regarding the cell if necessary. Reply ID
2314 contains a network address and identifies the PE and APU to
which queries regarding the cell, and the result of processing of
the cell, should be directed.
[0114] Cell body 2306 contains information independent of the
network's protocol. The exploded portion of FIG. 23 shows the
details of cell body 2306. Header 2320 of cell body 2306 identifies
the start of the cell body. Cell interface 2322 contains
information necessary for the cell's utilization. This information
includes global unique ID 2324, required APUs 2326, sandbox size
2328 and previous cell ID 2330.
[0115] Global unique ID 2324 uniquely identifies software cell 2302
throughout network 104. Global unique ID 2324 is generated on the
basis of source ID 2312, e.g. the unique identification of a PE or
APU within source ID 2312, and the time and date of generation or
transmission of software cell 2302. Required APUs 2326 provides the
minimum number of APUs required to execute the cell. Sandbox size
2328 provides the amount of protected memory in the required APUs'
associated DRAM necessary to execute the cell. Previous cell ID
2330 provides the identity of a previous cell in a group of cells
requiring sequential execution, e.g., streaming data.
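
The layout of software cell 2302 may be sketched as C structures.
The field widths, the two-word network identifiers and the ordering
are assumptions made only to make FIG. 23 concrete; the actual
encoding depends on the protocol of network 104.

    struct routing_info {                   /* routing information section 2304 */
        unsigned int  header;               /* header 2308                      */
        unsigned int  destination_id[2];    /* destination ID 2310: network
                                               address plus target PE/APU       */
        unsigned int  source_id[2];         /* source ID 2312                   */
        unsigned int  reply_id[2];          /* reply ID 2314                    */
    };

    struct cell_interface {                 /* cell interface 2322 */
        unsigned long global_unique_id;     /* global unique ID 2324            */
        unsigned int  required_apus;        /* required APUs 2326               */
        unsigned long sandbox_size;         /* sandbox size 2328                */
        unsigned long previous_cell_id;     /* previous cell ID 2330            */
    };

    struct software_cell {
        struct routing_info   routing;      /* depends on the network protocol  */
        unsigned int          body_header;  /* header 2320 of cell body 2306    */
        struct cell_interface iface;
        /* Implementation section 2332 (DMA command list 2334, programs 2336
           and data 2338) follows as variable-length regions.                  */
    };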
[0116] Implementation section 2332 contains the cell's core
information. This information includes DMA command list 2334,
programs 2336 and data 2338. Programs 2336 contain the programs to
be run by the APUs (called "apulets"), e.g., APU programs 2360 and
2362, and data 2338 contain the data to be processed with these
programs. DMA command list 2334 contains a series of DMA commands
needed to start the programs. These DMA commands include DMA
commands 2340, 2350, 2355 and 2358. The PU issues these DMA
commands to the DMAC.
[0117] DMA command 2340 includes VID 2342. VID 2342 is the virtual
ID of an APU which is mapped to a physical ID when the DMA commands
are issued. DMA command 2340 also includes load command 2344 and
address 2346. Load command 2344 directs the APU to read particular
information from the DRAM into local storage. Address 2346 provides
the virtual address in the DRAM containing this information. The
information can be, e.g., programs from programs section 2336, data
from data section 2338 or other data. Finally, DMA command 2340
includes local storage address 2348. This address identifies the
address in local storage where the information should be loaded.
DMA command 2350 contains similar information. Other DMA commands
are also possible.
[0118] DMA command list 2334 also includes a series of kick
commands, e.g., kick commands 2355 and 2358. Kick commands are
commands issued by a PU to an APU to initiate the processing of a
cell. DMA kick command 2355 includes virtual APU ID 2352, kick
command 2354 and program counter 2356. Virtual APU ID 2352
identifies the APU to be kicked, kick command 2354 provides the
relevant kick command and program counter 2356 provides the address
for the program counter for executing the program. DMA kick command
2358 provides similar information for the same APU or another
APU.
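
The two kinds of DMA command carried in DMA command list 2334 can
likewise be sketched as C structures; the opcode fields and types
are illustrative assumptions.

    struct dma_load_command {               /* e.g., DMA command 2340 */
        unsigned int  vid;                  /* VID 2342: virtual APU ID, mapped to
                                               a physical ID when the command issues */
        unsigned int  load_opcode;          /* load command 2344                  */
        unsigned long dram_address;         /* address 2346: virtual DRAM address */
        unsigned long local_store_address;  /* local storage address 2348         */
    };

    struct dma_kick_command {               /* e.g., kick commands 2355 and 2358 */
        unsigned int  vid;                  /* virtual APU ID 2352                */
        unsigned int  kick_opcode;          /* kick command 2354                  */
        unsigned long program_counter;      /* program counter 2356               */
    };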
[0119] As noted, the PUs treat the APUs as independent processors,
not co-processors. To control processing by the APUs, therefore,
the PU uses commands analogous to remote procedure calls. These
commands are designated "APU Remote Procedure Calls" (ARPCs). A PU
implements an ARPC by issuing a series of DMA commands to the DMAC.
The DMAC loads the APU program and its associated stack frame into
the local storage of an APU. The PU then issues an initial kick to
the APU to execute the APU program.
[0120] FIG. 24 illustrates the steps of an ARPC for executing an
apulet. The steps performed by the PU in initiating processing of
the apulet by a designated APU are shown in the first portion 2402
of FIG. 24, and the steps performed by the designated APU in
processing the apulet are shown in the second portion 2404 of FIG.
24.
[0121] In step 2410, the PU evaluates the apulet and then
designates an APU for processing the apulet. In step 2412, the PU
allocates space in the DRAM for executing the apulet by issuing a
DMA command to the DMAC to set memory access keys for the necessary
sandbox or sandboxes. In step 2414, the PU enables an interrupt
request for the designated APU to signal completion of the apulet.
In step 2418, the PU issues a DMA command to the DMAC to load the
apulet from the DRAM to the local storage of the APU. In step 2420,
the DMA command is executed, and the apulet is read from the DRAM
to the APU's local storage. In step 2422, the PU issues a DMA
command to the DMAC to load the stack frame associated with the
apulet from the DRAM to the APU's local storage. In step 2423, the
DMA command is executed, and the stack frame is read from the DRAM
to the APU's local storage. In step 2424, the PU issues a DMA
command for the DMAC to assign a key to the APU to allow the APU to
read and write data from and to the hardware sandbox or sandboxes
designated in step 2412. In step 2426, the DMAC updates the key
control table (KTAB) with the key assigned to the APU. In step
2428, the PU issues a DMA command "kick" to the APU to start
processing of the program. Other DMA commands may be issued by the
PU in the execution of a particular ARPC depending upon the
particular apulet.
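
A minimal sketch of the PU side of an ARPC (steps 2410 through 2428)
follows. The apulet descriptor and every helper function are
hypothetical stand-ins for the DMA commands the PU actually issues
to the DMAC, introduced here only for illustration.

    struct apulet_desc {                       /* hypothetical descriptor */
        unsigned long code_addr, code_size;    /* apulet location in the DRAM      */
        unsigned long stack_addr, stack_size;  /* associated stack frame           */
        unsigned long entry_point;             /* initial program counter          */
        unsigned int  sandbox_key;             /* key for the designated sandboxes */
    };

    /* Hypothetical helpers, each standing for one or more DMA commands. */
    int  designate_apu(const struct apulet_desc *a);
    void set_sandbox_access_keys(const struct apulet_desc *a);
    void enable_completion_interrupt(int apu);
    void dma_load(int apu, unsigned long dram_addr, unsigned long size);
    void assign_apu_key(int apu, unsigned int key);
    void dma_kick(int apu, unsigned long program_counter);

    void pu_issue_arpc(const struct apulet_desc *a)
    {
        int apu = designate_apu(a);                    /* step 2410 */
        set_sandbox_access_keys(a);                    /* step 2412 */
        enable_completion_interrupt(apu);              /* step 2414 */
        dma_load(apu, a->code_addr, a->code_size);     /* steps 2418-2420 */
        dma_load(apu, a->stack_addr, a->stack_size);   /* steps 2422-2423 */
        assign_apu_key(apu, a->sandbox_key);           /* steps 2424-2426 */
        dma_kick(apu, a->entry_point);                 /* step 2428 */
    }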
[0122] As indicated above, second portion 2404 of FIG. 24
illustrates the steps performed by the APU in executing the apulet.
In step 2430, the APU begins to execute the apulet in response to
the kick command issued at step 2428. In step 2432, the APU, at the
direction of the apulet, evaluates the apulet's associated stack
frame. In step 2434, the APU issues multiple DMA commands to the
DMAC to load data designated as needed by the stack frame from the
DRAM to the APU's local storage. In step 2436, these DMA commands
are executed, and the data are read from the DRAM to the APU's
local storage. In step 2438, the APU executes the apulet and
generates a result. In step 2440, the APU issues a DMA command to
the DMAC to store the result in the DRAM. In step 2442, the DMA
command is executed and the result of the apulet is written from
the APU's local storage to the DRAM. In step 2444, the APU issues
an interrupt request to the PU to signal that the ARPC has been
completed.
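
The corresponding APU side (steps 2430 through 2444) can be sketched
in the same hedged style; the helper names are again hypothetical.

    struct stack_frame;                               /* opaque in this sketch */

    struct stack_frame *evaluate_stack_frame(void);           /* step 2432       */
    void dma_load_needed_data(const struct stack_frame *sf);  /* steps 2434-2436 */
    void execute_apulet_body(void);                           /* step 2438       */
    void dma_store_result(void);                              /* steps 2440-2442 */
    void signal_arpc_complete(void);                          /* step 2444       */

    void apu_process_apulet(void)            /* entered on the kick, step 2430 */
    {
        struct stack_frame *sf = evaluate_stack_frame();
        dma_load_needed_data(sf);
        execute_apulet_body();
        dma_store_result();
        signal_arpc_complete();
    }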
[0123] The ability of APUs to perform tasks independently under the
direction of a PU enables a PU to dedicate a group of APUs, and the
memory resources associated with a group of APUs, to performing
extended tasks. For example, a PU can dedicate one or more APUs,
and a group of memory sandboxes associated with these one or more
APUs, to receiving data transmitted over network 104 over an
extended period and to directing the data received during this
period to one or more other APUs and their associated memory
sandboxes for further processing. This ability is particularly
advantageous to processing streaming data transmitted over network
104, e.g., streaming MPEG or streaming ATRAC audio or video data. A
PU can dedicate one or more APUs and their associated memory
sandboxes to receiving these data and one or more other APUs and
their associated memory sandboxes to decompressing and further
processing these data. In other words, the PU can establish a
dedicated pipeline relationship among a group of APUs and their
associated memory sandboxes for processing such data.
[0124] In order for such processing to be performed efficiently,
however, the pipeline's dedicated APUs and memory sandboxes should
remain dedicated to the pipeline during periods in which processing
of apulets comprising the data stream does not occur. In other
words, the dedicated APUs and their associated sandboxes should be
placed in a reserved state during these periods. The reservation of
an APU and its associated memory sandbox or sandboxes upon
completion of processing of an apulet is called a "resident
termination." A resident termination occurs in response to an
instruction from a PU.
[0125] FIGS. 25, 26A and 26B illustrate the establishment of a
dedicated pipeline structure comprising a group of APUs and their
associated sandboxes for the processing of streaming data, e.g.,
streaming MPEG data. As shown in FIG. 25, the components of this
pipeline structure include PE 2502 and DRAM 2518. PE 2502 includes
PU 2504, DMAC 2506 and a plurality of APUs, including APU 2508, APU
2510 and APU 2512. Communications among PU 2504, DMAC 2506 and
these APUs occur through PE bus 2514. Wide bandwidth bus 2516
connects DMAC 2506 to DRAM 2518. DRAM 2518 includes a plurality of
sandboxes, e.g., sandbox 2520, sandbox 2522, sandbox 2524 and
sandbox 2526.
[0126] FIG. 26A illustrates the steps for establishing the
dedicated pipeline. In step 2610, PU 2504 assigns APU 2508 to
process a network apulet. A network apulet comprises a program for
processing the network protocol of network 104. In this case, this
protocol is the Transmission Control Protocol/Internet Protocol
(TCP/IP). TCP/IP data packets conforming to this protocol are
transmitted over network 104. Upon receipt, APU 2508 processes
these packets and assembles the data in the packets into software
cells 102. In step 2612, PU 2504 instructs APU 2508 to perform
resident terminations upon the completion of the processing of the
network apulet. In step 2614, PU 2504 assigns APUs 2510 and 2512 to
process MPEG apulets. In step 2615, PU 2504 instructs APUs 2510 and
2512 also to perform resident terminations upon the completion of
the processing of the MPEG apulets. In step 2616, PU 2504
designates sandbox 2520 as a source sandbox for access by APU 2508
and APU 2510. In step 2618, PU 2504 designates sandbox 2522 as a
destination sandbox for access by APU 2510. In step 2620, PU 2504
designates sandbox 2524 as a source sandbox for access by APU 2508
and APU 2512. In step 2622, PU 2504 designates sandbox 2526 as a
destination sandbox for access by APU 2512. In step 2624, APU 2510
and APU 2512 send synchronize read commands to blocks of memory
within, respectively, source sandbox 2520 and source sandbox 2524
to set these blocks of memory into the blocking state. The process
finally moves to step 2628 where establishment of the dedicated
pipeline is complete and the resources dedicated to the pipeline
are reserved. APUs 2508, 2510 and 2512 and their associated
sandboxes 2520, 2522, 2524 and 2526, therefore, enter the reserved
state.
[0127] FIG. 26B illustrates the steps for processing streaming MPEG
data by this dedicated pipeline. In step 2630, APU 2508, which
processes the network apulet, receives in its local storage TCP/IP
data packets from network 104. In step 2632, APU 2508 processes
these TCP/IP data packets and assembles the data within these
packets into software cells 102. In step 2634, APU 2508 examines
header 2320 (FIG. 23) of the software cells to determine whether
the cells contain MPEG data. If a cell does not contain MPEG data,
then, in step 2636, APU 2508 transmits the cell to a general
purpose sandbox designated within DRAM 2518 for processing other
data by other APUs not included within the dedicated pipeline. APU
2508 also notifies PU 2504 of this transmission.
[0128] On the other hand, if a software cell contains MPEG data,
then, in step 2638, APU 2508 examines previous cell ID 2330 (FIG.
23) of the cell to identify the MPEG data stream to which the cell
belongs. In step 2640, APU 2508 chooses an APU of the dedicated
pipeline for processing of the cell. In this case, APU 2508 chooses
APU 2510 to process these data. This choice is based upon previous
cell ID 2330 and load balancing factors. For example, if previous
cell ID 2330 indicates that the previous software cell of the MPEG
data stream to which the software cell belongs was sent to APU 2510
for processing, then the present software cell normally also will
be sent to APU 2510 for processing. In step 2642, APU 2508 issues a
synchronize write command to write the MPEG data to sandbox 2520.
Since this sandbox previously was set to the blocking state, the
MPEG data, in step 2644, automatically is read from sandbox 2520 to
the local storage of APU 2510. In step 2646, APU 2510 processes the
MPEG data in its local storage to generate video data. In step
2648, APU 2510 writes the video data to sandbox 2522. In step 2650,
APU 2510 issues a synchronize read command to sandbox 2520 to
prepare this sandbox to receive additional MPEG data. In step 2652,
APU 2510 processes a resident termination. This processing causes
this APU to enter the reserved state during which the APU waits to
process additional MPEG data in the MPEG data stream.
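
The routing decision made by APU 2508 in steps 2634 through 2644 can
be summarized in C. The struct software_cell is the sketch given
earlier; the helper functions and the load-balancing rule are
assumptions introduced only for illustration.

    struct software_cell;                                 /* as sketched earlier */

    int  cell_contains_mpeg(const struct software_cell *c);      /* header 2320 */
    unsigned long previous_cell_id_of(const struct software_cell *c); /* ID 2330 */
    void send_to_general_purpose_sandbox(const struct software_cell *c);
    void notify_pu(const struct software_cell *c);
    int  apu_that_handled(unsigned long previous_cell_id);  /* -1 if stream is new */
    int  least_loaded_pipeline_apu(void);
    void sync_write_to_source_sandbox(int apu, const struct software_cell *c);

    void dispatch_cell(const struct software_cell *cell)
    {
        if (!cell_contains_mpeg(cell)) {                  /* step 2634 */
            send_to_general_purpose_sandbox(cell);        /* step 2636 */
            notify_pu(cell);
            return;
        }
        /* Steps 2638-2640: keep a stream on the APU that handled its previous
           cell; otherwise choose a pipeline APU by load balancing.            */
        int apu = apu_that_handled(previous_cell_id_of(cell));
        if (apu < 0)
            apu = least_loaded_pipeline_apu();
        sync_write_to_source_sandbox(apu, cell);          /* steps 2642-2644 */
    }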
[0129] Other dedicated structures can be established among a group
of APUs and their associated sandboxes for processing other types
of data. For example, as shown in FIG. 27, a dedicated group of
APUs, e.g., APUs 2702, 2708 and 2714, can be established for
performing geometric transformations upon three dimensional objects
to generate two dimensional display lists. These two dimensional
display lists can be further processed (rendered) by other APUs to
generate pixel data. To perform this processing, sandboxes are
dedicated to APUs 2702, 2708 and 2714 for storing the three
dimensional objects and the display lists resulting from the
processing of these objects. For example, source sandboxes 2704,
2710 and 2716 are dedicated to storing the three dimensional
objects processed by, respectively, APU 2702, APU 2708 and APU
2714. In a similar manner, destination sandboxes 2706, 2712 and
2718 are dedicated to storing the display lists resulting from the
processing of these three dimensional objects by, respectively, APU
2702, APU 2708 and APU 2714.
[0130] Coordinating APU 2720 is dedicated to receiving in its local
storage the display lists from destination sandboxes 2706, 2712 and
2718. APU 2720 arbitrates among these display lists and sends them
to other APUs for the rendering of pixel data.
[0131] The processors of system 101 also employ an absolute timer.
The absolute timer provides a clock signal to the APUs and other
elements of a PE which is both independent of, and faster than, the
clock signal driving these elements. The use of this absolute timer
is illustrated in FIG. 28.
[0132] As shown in this figure, the absolute timer establishes a
time budget for the performance of tasks by the APUs. This time
budget provides a time for completing these tasks which is longer
than that necessary for the APUs' processing of the tasks. As a
result, for each task, there is, within the time budget, a busy
period and a standby period. All apulets are written for processing
on the basis of this time budget regardless of the APUs' actual
processing time or speed. For example, for a particular APU of a
PE, a particular task may be performed during busy period 2802 of
time budget 2804. Since busy period 2802 is less than time budget
2804, a standby period 2806 occurs during the time budget. During
this standby period, the APU goes into a sleep mode during which
less power is consumed by the APU.
[0133] The results of processing a task are not expected by other
APUs, or other elements of a PE, until time budget 2804 expires.
Using the time budget established by the absolute timer, therefore,
the results of the APUs' processing always are coordinated
regardless of the APUs' actual processing speeds.
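
A minimal sketch of the time-budget behavior of FIG. 28 follows,
assuming the absolute timer can be read as a free-running tick
counter; the helper names are placeholders and not taken from the
specification.

    unsigned long read_absolute_timer(void);     /* free-running tick counter     */
    void perform_task(void);                     /* the apulet's work             */
    void enter_sleep_mode(unsigned long ticks);  /* low-power standby for 'ticks' */

    void run_task_within_budget(unsigned long budget_ticks)   /* time budget 2804 */
    {
        unsigned long start = read_absolute_timer();
        perform_task();                                        /* busy period 2802 */
        unsigned long elapsed = read_absolute_timer() - start;
        if (elapsed < budget_ticks)
            enter_sleep_mode(budget_ticks - elapsed);          /* standby period 2806 */
        /* Results are treated as available only when the budget expires, so
           a faster APU (busy period 2808) simply sleeps longer (2810).        */
    }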
[0134] In the future, the speed of processing by the APUs will
become faster. The time budget established by the absolute timer,
however, will remain the same. For example, as shown in FIG. 28, an
APU in the future will execute a task in a shorter period and,
therefore, will have a longer standby period. Busy period 2808,
therefore, is shorter than busy period 2802, and standby period
2810 is longer than standby period 2806. However, since programs
are written for processing on the basis of the same time budget
established by the absolute timer, coordination of the results of
processing among the APUs is maintained. As a result, faster APUs
can process programs written for slower APUs without causing
conflicts in the times at which the results of this processing are
expected.
[0135] In lieu of an absolute timer to establish coordination among
the APUs, the PU, or one or more designated APUs, can analyze the
particular instructions or microcode being executed by an APU in
processing an apulet for problems in the coordination of the APUs'
parallel processing created by enhanced or different operating
speeds. "No operation" ("NOOP") instructions can be inserted into
the instructions and executed by some of the APUs to maintain the
proper sequential completion of processing by the APUs expected by
the apulet. By inserting these NOOPs into the instructions, the
correct timing for the APUs' execution of all instructions can be
maintained.
[0136] As described above, each processing element (PE) comprises a
processing unit (PU) and a plurality of attached processing units
(APUS) for performing parallel processing of data by one or more
applications by the APUs coordinated and controlled by the PU. Some
variations of this PE were described in the context of a Broadband
Engine (BE) and the Visualizer (VS). Regardless, approaches to
power management must be considered in the design of a PE (or for
that matter, any type of processor). In general, any processor
produces heat as a result of using power in executing instructions
(e.g., processing data according to applications). In particular, a
PE, or any processor having a relatively high transistor density
and a relatively high switching speed (e.g., clock cycle), may
potentially damage itself by producing too much heat. This problem
may be addressed by power management. In addition, the use of power
management can reduce the operating cost of a processor by reducing
the average amount of power it uses, and may make the processor more
suitable for portable applications.
[0137] One form of straightforward power management is simply to
design a PE to operate at maximum, or close to maximum, power
levels all of the time without generating enough heat to damage
itself. However, this approach further complicates the chip-level
design of the processor, and increases the expense of
manufacturing. These problems are compounded by the use of
temperature sensors and the like in a mechanical feedback design
for the processor. To avoid the disadvantages of the above
approaches, a non-mechanical, feedback-based power management approach
may be used. In such an approach, the execution of instructions,
and the observed, or estimated, correlation of average heat output
per instruction, is used to estimate the amount of heat being
generated over a period of time. With this information a power
management application may be able to dynamically alter the
execution of an application to avoid overheating.
[0138] Moreover, we have observed that the amount of heat generated
by a processor depends directly on the type of instruction
that the processor is executing, e.g., some instructions use more
of the processor than other instructions. Therefore, and in
accordance with the invention, a processing environment performs
power management by monitoring the number and type of processor
accesses and estimating an energy usage as a function thereof.
[0139] A simplified form of the inventive concept is shown in FIG.
29. A processing environment 2900 comprises a central processing
unit (CPU) 2905 and a number of instruction counters, as
represented by instruction counters 2910 and 2920 for use, e.g., in
a personal computer, network server, etc. The elements shown in
FIG. 29 can either represent an integrated circuit or a number of
discrete circuit elements. The flow of instructions to CPU 2905 for
execution occurs via bus 2906. In this example, the instruction set
of CPU 2905 is divided a priori into a number of types, at least
two of which are monitored by the arrangement shown in FIG. 29.
Illustratively, instruction counter 2910 monitors bus 2906 for
keeping count of the number of floating point instructions, while
instruction counter 2920 monitors bus 2906 for keeping count of the
number of fixed point instructions. Since an instruction set of a
processor is predefined, the design of an instruction counter is
straightforward and will not be described herein. Each instruction
counter is capable of being reset by CPU 2905 via control signal
2909. The value of the count of each type of instruction currently
stored in each instruction counter is available to CPU 2905 via bus
2907. Although shown as separate buses, a bi-directional bus can be
used in place of one or both of buses 2906 and 2907.
[0140] With continued reference to FIG. 29, an illustrative method
for use by processing environment 2900 for performing power
management is shown in FIG. 30. In step 3005, CPU 2905 resets, or
clears, instruction counters 2910 and 2920. In step 3010, CPU 2905
executes instructions, via bus 2906, of a program (not shown) for a
time period T. After the expiration of the time period T, CPU 2905
reads the values from instruction counters 2910 and 2920 in step
3015. In step 3020, CPU 2905 estimates a heat level as a function
of the type of instructions executed in the aforementioned time
period T. Of course, digital logic other than the CPU could also be
used to perform this estimation. This power management scheme
assumes that the period of time T will, in general, be much less
than the amount of time it takes for a significant heat change
within the processor. One illustration of estimating a heat level
is to assign a priori an average amount of heat, F, for each
floating point instruction and an average amount of heat, I, for
each fixed point instruction. An estimate of the heat level is then
determined by multiplying the values of the respective instruction
counters with the assigned average amounts of heat. For
example,
Estimated Heat level=(F)(f)+(I)(i);
[0141] where f and i represent the values of the count from
instruction counters 2910 and 2920, respectively.
[0142] Once an estimate of the heat level is determined, CPU 2905
can, if necessary, attempt corrective action if the estimated heat
level is above a predetermined value by, e.g., enforcing an idle
period before continuing execution of any programs, or setting an
alarm.
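
The method of FIG. 30 reduces to a few lines of C. The heat
constants F and I, the threshold value, and the helper functions are
placeholder assumptions; in practice F and I would be determined
experimentally for the particular CPU.

    #define HEAT_PER_FLOAT  3.0    /* F: heat per floating point instruction (assumed) */
    #define HEAT_PER_FIXED  1.0    /* I: heat per fixed point instruction (assumed)    */
    #define HEAT_LIMIT      1.0e6  /* predetermined value triggering corrective action */

    void reset_instruction_counters(void);        /* clears counters 2910 and 2920 */
    void execute_program_for_period_T(void);
    unsigned long read_float_counter(void);       /* counter 2910 */
    unsigned long read_fixed_counter(void);       /* counter 2920 */
    void enforce_idle_period(void);

    void power_management_cycle(void)
    {
        reset_instruction_counters();                        /* step 3005 */
        execute_program_for_period_T();                      /* step 3010 */
        unsigned long f = read_float_counter();              /* step 3015 */
        unsigned long i = read_fixed_counter();

        double heat = HEAT_PER_FLOAT * (double)f             /* step 3020:    */
                    + HEAT_PER_FIXED * (double)i;            /* (F)(f)+(I)(i) */
        if (heat > HEAT_LIMIT)
            enforce_idle_period();                           /* corrective action */
    }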
[0143] In another embodiment, a processing element (PE) includes a
processing unit (PU) and a number of attached processing units
(APUs), at least one of which is adapted to keep track of at least
some of the instructions being executed. For example, the
instruction set of an APU is divided a priori into a number of
types, each type associated with a different amount of power
consumption which serves as a proxy for heat generation. The APU
keeps track of the amount of each type of instruction executed over
a time period--the power information--and provides this power
information to the PU. Stated another way, the APU monitors a rate
at which it executes instructions. Alternatively, the APU monitors
a rate at which another APU within the same PE (or within another
PE) executes instructions. The PU then performs power management as
a function of the power information provided by the APU. For
example, the PU may direct that a particular APU enter an idle
state to reduce power consumption. It should be noted that one or
more APUs can provide their respective power information to the PU
or to another APU, which then performs, e.g., dynamic power
management. It is not necessary that every APU implement the
inventive concept.
[0144] An illustrative embodiment for a PE that dynamically
performs power management is shown in FIG. 31. PE 3100 is similar
to the above-described PEs and, as such, like numbers represent
similar elements and are not described further herein. For example,
see PU 203 of FIG. 2. PE 3100 comprises PU 203 and a number of APUs
as represented by APU 3110 (again, a PE can have any number of APUs
depending on the processing power desired). APU 3110 comprises four
instruction counters: 3115, 3120, 3125 and 3130. Each instruction
processed by an APU is illustratively designated as being either a
vector instruction or a scalar instruction. A vector instruction
can either be a floating point vector instruction or an integer
vector instruction. Similarly, each scalar instruction can either
be a floating point scalar instruction or an integer scalar
instruction. Thus, in this example, there are four possible types
of instructions, the execution of which generates
different amounts of heat. In descending order, it is assumed that
the floating point vector instruction (a count of which is kept by
instruction counter 3115) uses the most power and, therefore,
produces the most heat. The next highest power is consumed by the
integer vector instruction (count maintained by instruction counter
3120), and then the floating point scalar instruction (count
maintained by instruction counter 3125). Finally, the integer
scalar instruction generates the least amount of heat (a count of
which is maintained by instruction counter 3130). Thus, an APU, via
the instruction counters, will keep track of how many of each of
these four different types of instructions are executed over a time
period T. At the end of the time period T, the power information
for APU 3110, i.e., the four different instruction counts, is
provided to PU 203 (e.g., via an interrupt on bus 223) and APU 3110
resets the instruction counters.
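
On the APU side, the power information of FIG. 31 amounts to four
counter values reported to the PU at the end of each period T. The
struct layout and helper functions below are assumptions made for
illustration.

    struct power_info {
        unsigned long fp_vector;    /* floating point vector count (counter 3115) */
        unsigned long int_vector;   /* integer vector count (counter 3120)        */
        unsigned long fp_scalar;    /* floating point scalar count (counter 3125) */
        unsigned long int_scalar;   /* integer scalar count (counter 3130)        */
    };

    struct power_info read_instruction_counters(void);
    void send_power_info_to_pu(const struct power_info *pi); /* e.g., interrupt on bus 223 */
    void reset_instruction_counters(void);

    void apu_report_power_info(void)          /* run at the end of each period T */
    {
        struct power_info pi = read_instruction_counters();
        send_power_info_to_pu(&pi);
        reset_instruction_counters();
    }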
[0145] A more detailed view of APU 3110 is shown in FIG. 32. Again,
APU 3110 is similar to the APU 402 described above with reference
to FIG. 4 and, as such, like numbers represent similar elements and
are not described further herein. APU 3110 additionally comprises
the four instruction counters 3115, 3120, 3125 and 3130, as
described above. These instruction counters monitor the
instructions being executed via bus 408 and store the respective
instruction counts. The instruction counters output the counts they
maintain onto bus 408 under appropriate conditions, e.g., upon
request, upon expiration of a predetermined time interval, or in an
interrupt driven fashion, such as upon exceeding a particular
threshold prior to expiration of the time interval.
[0146] An illustrative power management method for use in PE 3100
is shown in FIG. 33. Steps 3305 through 3320 are performed by APU
3110, while steps 3330 through 3345 are performed by PU 203. In step
3305, APU 3110 resets instruction counters 3115, 3120, 3125 and
3130. In step 3310, APU 3110 executes instructions of a program
(not shown) for a time period T. After the expiration of the time
period T, APU 3110 reads the values from instruction counters 3115,
3120, 3125 and 3130 in step 3315. In step 3320, APU 3110 provides,
e.g., via an interrupt, the power information, i.e., the four
instruction counts, to PU 203 (or to another APU). The latter
receives these instruction counts in step 3330. In step 3335, PU
203 (or the other APU) estimates a heat level for APU 3110 as a
function of how many of each type of instruction were executed in the
time period T. For example, the heat level estimation can be
performed using an equation similar to the one described above,
where each of the four types of instructions is associated a
priori with generating a particular average heat level, which can
be determined experimentally. In this case, the count value for
each instruction type is multiplied by the respective average heat
value and the results for each of the four types of instructions
are added together. Alternatively, it can be assumed that an
average heat level is generated when any instruction is executed,
but the type of instruction is weighted differently. For example, a
floating point vector instruction can be assumed to generate four
times or six times the amount of heat of an integer scalar
instruction, etc. In this case, an estimate of the heat level is:
Estimated Heat level=.SIGMA..sub.k=1.sup.K(W.sub.k)(I.sub.k)(H);
[0147] where, W.sub.k represents the weight for instruction type k,
I.sub.k represents the count for instruction type k, H is an
average heat level, and K is the number of different types of
instructions. It should be observed that the above-described
equation could be suitably modified to include fixed-level
estimates of contributions from other heat sources, e.g., other
APUs.
[0148] Thus, PU 203 (or the other APU) evaluates the counts of the
four different instructions over the time period T to check for
potential overheating of APU 3110. This is illustrated in steps
3340 and 3345, where, if the estimated heat level exceeds a
predetermined amount, PU 203 (or the other APU) dynamically alters
the execution of APU 3110 by, e.g., putting APU 3110 into an idle
mode for a predefined amount of time.
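
The PU-side evaluation (steps 3330 through 3345) using the weighted
estimate of paragraph [0147] can be sketched as follows, with struct
power_info as defined in the sketch above. The weights, the average
heat level H, the threshold and idle_apu() are placeholder
assumptions.

    static const double W[4] = { 6.0, 4.0, 2.0, 1.0 }; /* W.sub.k, largest for fp vector (assumed) */
    static const double H = 1.0;                        /* average heat per instruction (assumed)   */
    static const double HEAT_THRESHOLD = 1.0e6;         /* predetermined amount (assumed)           */

    void idle_apu(int apu_id);                          /* put the APU into idle mode */

    void pu_evaluate_apu(int apu_id, const struct power_info *pi)    /* step 3330 */
    {
        const unsigned long counts[4] = { pi->fp_vector, pi->int_vector,
                                          pi->fp_scalar, pi->int_scalar };
        double estimate = 0.0;
        int k;

        for (k = 0; k < 4; k++)                     /* step 3335: sum of W_k * I_k * H */
            estimate += W[k] * (double)counts[k] * H;

        if (estimate > HEAT_THRESHOLD)              /* step 3340 */
            idle_apu(apu_id);                       /* step 3345 */
    }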
[0149] With respect to the above-mentioned time period T, this time
period can be predefined or determined dynamically. For example, in
the context of the above-described PE processing environment, the
time period T can be determined by a time budget associated with an
apulet being executed by the APU of interest. As another example, a
time budget can be specified in a header of a software cell, as
described in the foregoing.
[0150] There are several advantages to this power management
scheme. First, the breakdown of the different instructions allows a
much more accurate measure of the amount of energy being used,
which is assumed to be represented by the amount of heat being
generated in an APU. Second, it is possible, though not required,
to independently monitor each APU, which can then be independently
idled to cool off when necessary.
[0151] It should be noted that other power management variations
are possible. For example, FIG. 34 illustrates another form of
dynamic power control for a processing environment, or computing
module, including a number of processors, each providing power
information. In FIG. 34, a processing environment is represented by
PE 3400, which comprises PU 3410 and APUs 3415, 3420, 3425 and
3430. Other elements of a PE, e.g., the DMAC, are not shown for
simplicity. PU 3410 receives power information 3416, 3421, 3426 and
3431, from APUs 3415, 3420, 3425 and 3430, respectively. The
receipt of this power information by PU 3410 is assumed to occur
asynchronously from the APUs. In an alternative embodiment, the
power information may be provided from a first one of the APUs to a
second one of the APUs (either through the PU 3410 or via a more
direct connection shown by dashed line 3432). In this case, the
second APU desirably performs power management for the first APU,
including estimating the power consumption of the first APU.
[0152] Turning now to FIG. 35, an illustrative flow chart for
performing dynamic power management is shown. As can be observed
from this flow chart, PU 3410 selectively controls the APUs
independently and in a periodic fashion, e.g., at intervals of
every T2 seconds. Alternatively, one of the APUs can selectively
control the other APUs. In this example, in step 3505, once every
T2 seconds, PU 3410 estimates the heat levels for the APUs using
the power information received for the most recent time interval. In
step 3510, PU 3410 determines whether a predetermined heat level has
been exceeded. If not, the process ends. However, if at least one of
the APUs is
producing too much heat, PU 3410 selects that APU generating the
most heat for entering an idle mode in step 3515. This occurs
notwithstanding that other APUs may have also exceeded a
predetermined heat level in the same time interval. Thus, PU 3410
can selectively, and progressively, continue to idle additional
APUs should the heat level remain above the predetermined
threshold.
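
The FIG. 35 flow can be sketched as a routine run once every T2
seconds, with struct power_info as sketched earlier. Here
estimate_heat_level() stands for the weighted estimate above, and
the threshold is passed in as a parameter; all names are
placeholders.

    double estimate_heat_level(const struct power_info *pi); /* e.g., weighted sum above */
    void   idle_apu(int apu_id);

    /* Called once every T2 seconds with the latest power information for
       each of the n_apus APUs (step 3505).                                */
    void periodic_power_management(const struct power_info *pi, int n_apus,
                                   double heat_threshold)
    {
        int hottest = -1;
        double hottest_estimate = 0.0;
        int a;

        for (a = 0; a < n_apus; a++) {
            double e = estimate_heat_level(&pi[a]);
            if (e > heat_threshold && e > hottest_estimate) {   /* step 3510 */
                hottest = a;
                hottest_estimate = e;
            }
        }
        if (hottest >= 0)
            idle_apu(hottest);    /* step 3515: idle the APU generating the most heat */
        /* If the heat level remains excessive at the next interval, the next
           hottest APU is idled in turn, and so on.                            */
    }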
[0153] As can be observed from the above, power management by
monitoring the number and type of processor accesses was
illustrated via instruction fetches. However, the invention is not
so limited and other types and/or combinations of processor
accesses can also be used. For example, monitoring of an address
space accessed by a processor over a period of time can also be
used, e.g., in the context of power management for a system. As
illustration, a processor can track access to a hard disk subsystem
over a period of time in a battery powered laptop and provide an
indicator to the user, where the indicator represents an estimate
for the amount of battery power left at the current usage rate.
[0154] As such, the foregoing merely illustrates the principles of
the invention and it will thus be appreciated that those skilled in
the art will be able to devise numerous alternative arrangements
which, although not explicitly described herein, embody the
principles of the invention and are within its spirit and scope.
For example, although in the illustrative embodiments, power
management is described in the context of heat management, the
inventive concept is extendible in a straightforward way to other
forms of power management such as conserving usage of portable
power sources such as a battery. In addition, although in the
above-described embodiment, the inventive concept is presented as
an alternative to the use of traditional forms of power management,
the inventive concept is not so limited and can be used in
conjunction with these traditional forms, e.g., temperature
sensors.
* * * * *