U.S. patent application number 16/027163 was filed with the patent office on 2018-07-03 and published on 2018-11-01 for peripheral component.
This patent application is currently assigned to ATI TECHNOLOGIES ULC. The applicant listed for this patent is ADVANCED MICRO DEVICES, INC., ATI TECHNOLOGIES ULC. Invention is credited to Mark S. GROSSMAN, Stephen MOREIN, Shahin SOLKI.
Application Number | 20180314670 16/027163 |
Document ID | / |
Family ID | 41426838 |
Filed Date | 2018-07-03 |
United States Patent Application | 20180314670 |
Kind Code | A1 |
Inventors | SOLKI; Shahin; et al. |
Publication Date | November 1, 2018 |
PERIPHERAL COMPONENT
Abstract
Embodiments of a peripheral component are described herein.
Embodiments provide alternatives to the use of an external bridge
integrated circuit (IC) architecture. For example, an embodiment
multiplexes a peripheral bus such that multiple processors in one
peripheral component can use one peripheral interface slot without
requiring an external bridge IC. Embodiments are usable with known
bus protocols.
Inventors: | SOLKI; Shahin; (Richmond Hill, CA); MOREIN; Stephen; (San Jose, CA); GROSSMAN; Mark S.; (Palo Alto, CA) |
Applicant: |
Name | City | State | Country |
ADVANCED MICRO DEVICES, INC. | Santa Clara | CA | US |
ATI TECHNOLOGIES ULC | Markham | | CA |
Assignee: | ATI TECHNOLOGIES ULC (Markham, CA); ADVANCED MICRO DEVICES, INC. (Santa Clara, CA) |
Family ID: |
41426838 |
Appl. No.: |
16/027163 |
Filed: |
July 3, 2018 |
Related U.S. Patent Documents

Application Number | Filing Date | Patent Number | Continued by |
15374739 | Dec 9, 2016 | | 16027163 |
13764775 | Feb 11, 2013 | | 15374739 |
12340510 | Dec 19, 2008 | 8373709 | 13764775 |
12245686 | Oct 3, 2008 | 8892804 | 12340510 |
Current U.S.
Class: |
1/1 |
Current CPC
Class: |
G09G 5/363 20130101;
G09G 5/006 20130101; G06F 3/14 20130101; G06F 13/4282 20130101;
G06T 1/20 20130101; G09G 2330/021 20130101; G06F 2213/0026
20130101; G09G 2360/06 20130101; G06T 1/60 20130101 |
International
Class: |
G06F 13/42 20060101
G06F013/42; G06T 1/60 20060101 G06T001/60; G06T 1/20 20060101
G06T001/20; G09G 5/36 20060101 G09G005/36 |
Claims
1. A system comprising: a first bus; a CPU in communication with
the first bus; a first component having a first bridge in
communication with the CPU via the first bus and in communication
with a second bus; and a second component in communication with the
first component via the second bus and in communication with the
CPU via the first bus, through the second bus.
2. The system of claim 1 wherein: communication between the first
bus and the first component includes a bidirectional transfer of
data between the first bus and the first component; and
communication between the first component and the second component
includes a bidirectional transfer of data between the first
component and the second component via the second bus.
3. The system of claim 1 wherein the first and second components
are first and second graphics processing units (GPUs).
4. The system of claim 2, wherein: communication between the second
component and the CPU further includes a unidirectional transfer of
data from the second component to the CPU via a third bus.
5. The system of claim 4, wherein: communication from the CPU to
the second component occurs through the first component but not
through the third bus; and communication from the second component
to the CPU occurs through the third bus but not through the first
component.
6. The system of claim 3, wherein: communication between the second
component and the CPU occurs via the first component and does not
occur directly.
7. The system of claim 6, wherein the second component does not
include a bridge.
8. The system of claim 1, wherein the first bridge comprises a
peripheral component interconnect express bridge ("PCIe
bridge").
9. A method comprising: communicating between a CPU and a first
bus; communicating, via the first bus, with a first bridge of a first
component; and communicating, via the first bus, the first bridge,
and a second bus coupled between the first component and a second
component, between the second component and the CPU.
10. The method of claim 9, wherein: communication between the first
bus and the first component includes a bidirectional transfer of
data between the first bus and the first component; and
communication between the first component and the second component
includes a bidirectional transfer of data between the first
component and the second component via the second bus.
11. The method of claim 10, wherein: communication from the CPU to
the second component occurs through the first component but not
through the third bus; and communication from the second component
to the CPU occurs through the third bus but not through the first
component.
12. The method of claim 11, wherein: communication between the
second component and the CPU occurs via the first component and
does not occur directly.
13. The method of claim 12, wherein the second component does not
include a bridge.
14. The method of claim 9, wherein the first bridge comprises a
peripheral component interconnect express bridge ("PCIe bridge").
Description
CROSS REFERENCE TO RELATED APPLICATIONS
[0001] This application is a continuation of U.S. patent
application Ser. No. 15/374,739 filed on Dec. 9, 2016, which is a
continuation of U.S. patent application Ser. No. 13/764,775 filed
on Feb. 11, 2013 which is a continuation of U.S. application Ser.
No. 12/340,510 filed on Dec. 19, 2008 now U.S. Pat. No. 8,373,709
which is a continuation in part of U.S. patent application Ser. No.
12/245,686 filed on Oct. 3, 2008 now U.S. Pat. No. 8,892,804, each
of which are incorporated by reference as if fully set forth
herein.
FIELD OF THE INVENTION
[0002] The invention is in the field of data transfer in computer
and other digital systems.
BACKGROUND
[0003] As computer and other digital systems become more complex
and more capable, methods and hardware to enhance the transfer of
data between system components or elements continually evolve. Data
to be transferred include signals representing data, commands, or
any other signals. Speed and efficiency of data transfer is
particularly critical in systems that run very data-intensive
applications, such as graphics applications. In typical systems,
graphics processing capability is provided as a part of the central
processing unit (CPU) capability, or provided by a separate special
purpose processor such as a graphics processing unit (GPU) that
communicates with the CPU and assists in processing graphics data
for applications such as video games, etc. One or more GPUs may be
included in a system. In conventional multi-GPU systems, a bridged
host interface (for example a PCI express (PCIe.RTM.) bus)
must share bandwidth between peer-to-peer traffic and
host traffic. Traffic consists primarily of memory data transfers
but may often include commands. FIG. 1 is a block diagram of a
prior art system 100 that includes a root 102, a host bridge 104,
and two endpoints EP0 106a and EP1 106b. A typical root 102 is a
computer chipset including a central processing unit (CPU). Endpoints
are bus endpoints and can be various peripheral components, for
example special purpose processors such as graphics processing
units (GPUs). The root 102 is coupled to the bridge 104 by one or
more buses to communicate with peripheral components. Some
peripheral component endpoints (such as GPUs) require a relatively
large amount of bandwidth on the bus because of the large amount of
data involved in their functions. It would be desirable to provide
an architecture that reduced the number of components and yet
provided efficient data transfer between components. For example,
the cost of bridge integrated circuits (ICs) is relatively high. In
addition, the size of a typical bridge IC is comparable to the size
of a graphics processing unit (GPU) which requires additional
printed circuit board area and could add to layer counts. Bridge
ICs also require additional surrounding components for power,
straps, clock and possibly read only memory (ROM).
BRIEF DESCRIPTION OF THE DRAWINGS
[0004] FIG. 1 is a block diagram of a prior art processing system
with peripheral components.
[0005] FIG. 2 is a block diagram of portions of a multi-processor
system with a multiplexed peripheral component bus, according to an
embodiment.
[0006] FIG. 3 is a block diagram of portions of a processing system
with peripheral components, according to an embodiment.
[0007] FIG. 4 is a more detailed block diagram of a processing
system with peripheral components, according to an embodiment.
[0008] FIG. 5 is a block diagram of an embodiment in which one bus
endpoint includes an internal bridge.
[0009] FIG. 6 is a block diagram of an embodiment that includes
more than two bus endpoints, each including an internal bridge.
[0010] FIG. 7 is a block diagram illustrating views of memory space
from the perspectives of various components in a system, according
to an embodiment.
DETAILED DESCRIPTION
[0011] Embodiments of a multi-processor architecture and method are
described herein. Embodiments provide alternatives to the use of an
external bridge integrated circuit (IC) architecture. For example,
an embodiment multiplexes a peripheral bus such that multiple
processors can use one peripheral interface slot without requiring
an external bridge IC. Other embodiments include a system with
multiple bus endpoints coupled to a bus root via a host bus bridge
that is internal to at least one bus endpoint. In addition, the bus
endpoints are directly coupled to each other. Embodiments are
usable with known bus protocols.
[0012] FIG. 2 is a block diagram of portions of a multi-processor
system 700 with a multiplexed peripheral component bus, according
to an embodiment. In this example system, there are two GPUs, a
master GPU 702A and a slave GPU 702B. Each GPU 702 has 16
peripheral component interconnect express (PCIe.RTM.) transmit
(TX) lanes and 16 PCIe.RTM. receive (RX) lanes. Each of GPUs 702
includes a respective data link layer 706 and a respective physical
layer (PHY) 704. Eight of the TX/RX lanes of GPU 702A are connected
to half of the TX/RX lanes of an x16 PCIe.RTM. connector, or slot 708.
Eight of the TX/RX lanes of GPU 702B are connected to the remaining
TX/RX lanes of the x16 PCIe.RTM. connector or slot 708. The
remaining TX/RX lanes of each of GPU 702A and GPU 702B are
connected to each other, providing a direct, high-speed connection
between the GPUs 702.
[0013] The PCIe.RTM. x16 slot 708 (which normally goes to one GPU)
is split into two parts. Half of the slot is connected to GPU 702A
and the other half is connected to GPU 702B. Each GPU 702 basically
echoes back the other half of the data to the other GPU 702. That
is, data received by either GPU is forwarded to the other. Each GPU
702 sees all of the data received on the PCIe.RTM. bus, and
internally each GPU 702 decides whether it is supposed to answer
the request or command. Each GPU 702 then appropriately responds,
or does nothing. Some data or commands, such as "Reset" are
applicable to all of the GPUs 702.
[0014] From the system level point of view, or from the view of the
peripheral bus, there is only one PCIe.RTM. load (device) on the
PCIe.RTM. bus. Either GPU 702A or GPU 702B is accessed based on
address. For example, for Address Domain Access, master GPU 702A
can be assigned to one half of the address domain and slave GPU
702B can be assigned to the other half. The system can operate in
Master/Slave mode or in Single/Multi-GPU modes, and the modes can
be identified by straps.
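The address-domain split can be sketched as follows. This is an illustration only, not part of the application: the 4 GiB domain size and the function name `responding_gpu` are assumptions. Each GPU sees all traffic on the slot (the other half is echoed to it, per the preceding paragraph) and internally decides by address whether it should respond.

```python
# Illustrative sketch only: the domain size and names are assumptions.
ADDRESS_DOMAIN_SIZE = 1 << 32  # assumed size of the shared address domain

def responding_gpu(address):
    """Decide which GPU answers a request, per the half/half split above.

    Master GPU 702A is assigned the lower half of the address domain;
    slave GPU 702B is assigned the upper half. Each GPU runs a check
    like this internally on all traffic it sees and responds only to
    requests in its own half.
    """
    if not 0 <= address < ADDRESS_DOMAIN_SIZE:
        raise ValueError("address outside the shared address domain")
    return "GPU 702A" if address < ADDRESS_DOMAIN_SIZE // 2 else "GPU 702B"
```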
[0015] Various data paths are identified by reference numbers. A
reference clock (REF CLK) path is indicated by 711. An 8-lane RX-2
path is indicated by 709. An 8-lane RX-1 path is indicated by 713.
An 8-lane TX-1 path is indicated by 715. Control signals 710 are
non-PCIe.RTM. signals such as straps. The PHY 704 in each GPU 702
echoes the data to the proper lane or channel. Lane connections can
be made in an order that helps to optimize silicon design and/or to
support PCIe.RTM. slots with fewer than 16 lanes. Two GPUs are
shown as an example of a system, but the architecture is scalable
to n-GPUs. In addition, GPUs 702 are one example of a peripheral
component that can be coupled as described. Any other peripheral
components that normally communicate with a peripheral component
bus in a system could be similarly coupled.
[0016] FIG. 3 is a block diagram of portions of a processing system
200 with peripheral components, according to an embodiment. System
200 includes a bus root 202 that is similar to the bus root 102 of
FIG. 1. The bus root 202 in an embodiment is a chipset including a
CPU and system memory. The root 202 is coupled via a bus 209 to an
endpoint EP0 206a that includes an internal bridge 205a. The bus
209 in an embodiment is a PCI express (PCIe.RTM.) bus, but
embodiments are not so limited. EP0 206a is coupled to another
endpoint EP1 206b. EP1 206b includes an internal bridge 205b. EP0
206a and EP1 206b are coupled through their respective bridges via a bus
207. EP1 206b is coupled through its bridge 205b to the root 202
via a bus 211. Each of endpoints EP0 206a and EP1 206b includes
respective local memories 208a and 208b. From the perspective of
the root 202, buses 209 and 211 make up the transmit and receive
lanes, respectively, of a standard bidirectional point-to-point data
link.
[0017] In an embodiment, EP0 206a and EP1 206b are identical. As
further explained below, in various embodiments, bridge 205b is not
necessary, but is included for the purpose of having one version of
an endpoint, such as one version of a GPU, rather than
manufacturing two different versions. Note that EP0 may be used
standalone by directly connecting it to root 202 via buses 209 and
207; similarly EP1 may be used standalone by directly connecting it
to root 202 via buses 207 and 211.
[0018] The inclusion of a bridge 205 eliminates the need for an
external bridge such as bridge 104 of FIG. 1 when both EP0 and EP1
are present. In contrast to the "Y" or "T" formation of FIG. 1,
system 200 moves data in a loop (in this case in a clockwise
direction). The left endpoint EP0 can send data directly to the
right endpoint EP1. The return path from EP1 to EP0 is through the
root 202. As such, the root has the ability to reflect a packet of
data coming in from EP1 back out to EP0. In other words, the
architecture provides the appearance of a peer-to-peer transaction
on the same pair of wires as is used for endpoint to root
transactions.
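The clockwise loop described above can be sketched as a next-hop table. This is an illustration of the FIG. 3 topology only; the function and node names are not from the application. EP0 reaches EP1 directly over bus 207, while EP1 reaches EP0 by sending to the root, which reflects the packet back out.

```python
# Next hop for each node in the clockwise loop: root -> EP0 (bus 209),
# EP0 -> EP1 (bus 207), EP1 -> root (bus 211). Names are illustrative.
NEXT_HOP = {"root": "EP0", "EP0": "EP1", "EP1": "root"}

def route(src, dst):
    """Return the sequence of nodes a packet traverses from src to dst."""
    path = [src]
    while path[-1] != dst:
        path.append(NEXT_HOP[path[-1]])
    return path
```

A peer-to-peer transfer from EP1 to EP0 therefore traverses the root, matching the reflected return path described above, even though it appears to software as an ordinary peer-to-peer transaction.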
[0019] EP0 206a and EP1 206b are also configurable to operate in
the traditional configuration. That is, EP0 206a and EP1 206b are
each configurable to communicate directly with the root 202 via
buses 209 and 211, which are each bidirectional in such a
configuration.
[0020] FIG. 4 is a more detailed block diagram of a processing
system with peripheral components, according to an embodiment.
System 300 is similar to system 200, but additional details are
shown. System 300 includes a bus root 302 coupled to a system
memory 303. The bus root 302 is further coupled to an endpoint 305a
via a bus 309. For purposes of illustrating a particular
embodiment, endpoints 305a and 305b are GPUs, but embodiments are
not so limited. GPU0 305a includes multiple clients. Clients
include logic, such as shader units and decoder units, for
performing tasks. The clients are coupled to an internal bridge
through bus interface (I/F) logic, which controls all of the read
operations and write operations performed by the GPU.
[0021] GPU0 305a is coupled to a GPU1 305b via a bus 307 from the
internal bridge of GPU0 305a to the internal bridge of GPU1 305b.
In an embodiment, GPU1 305b is identical to GPU0 305a and includes
multiple clients, an internal bridge and I/F logic. Each GPU
typically connects to a dedicated local memory unit often
implemented as GDDR DRAM. GPU1 305b is coupled to the bus root 302
via a bus 311. In one embodiment, as the arrows indicate, data and
other messages such as read requests and completions flow in a
clockwise loop from the bus root 302 to GPU0 305a to GPU1 305b.
[0022] In other embodiments, one of the GPUs 305 does not include a
bridge. In yet other embodiments, data flows counterclockwise
rather than clockwise.
[0023] In one embodiment, the protocol that determines data routing
is handled in such a way as to make the architecture
appear the same as the architecture of FIG. 1. In particular, the
bridge in 305b must appear on link 307 to bridge 305a as an
upstream port, whereas the corresponding attach point on the bridge
in 305a must appear on link 309 to root 302 as a downstream port.
Furthermore, the embedded bridge must be able to see its outgoing
link as a return path for all requests it receives on its incoming
link, even though the physical routing of the two links is
different. This is achieved by setting the state of a Chain Mode
configuration strap for each GPU. If the strap is set to zero, the
bridge assumes both transmit and receive links are to an upstream
port, either a root complex or a bridge device. If the strap is set
to one, the bridge assumes a daisy-chain configuration.
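The Chain Mode strap can be pictured as selecting between two link interpretations. The following is a sketch under assumed names; the application does not give register-level detail.

```python
def configure_bridge(chain_mode_strap):
    """Interpret the Chain Mode strap described above (names assumed).

    Strap = 0: both the transmit and receive links face an upstream
    port (a root complex or a bridge device), as in a conventional
    topology. Strap = 1: daisy-chain configuration, in which the
    outgoing link is treated as the return path for requests arriving
    on the incoming link, even though the two links are physically
    routed to different devices.
    """
    if chain_mode_strap == 0:
        return {"tx_link": "upstream port", "rx_link": "upstream port",
                "daisy_chain": False}
    return {"tx_link": "next device in chain",
            "rx_link": "previous device in chain",
            "daisy_chain": True}
```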
[0024] In another embodiment, the peer to peer bridging function of
the root is a two-step process according to which GPU1 305b writes
data to the system memory 303, or buffer. Then as a separate
operation GPU0 305a reads the data back via the bus root 302.
[0025] The bus root 302 responds to requests normally, as if the
internal bridge were an external bridge (as in FIG. 1). In an
embodiment, the bridge of GPU0 305a is configured to be active,
while the bridge of GPU1 305b is configured to appear as a wire,
and simply pass data through. This allows the bus root 302 to see
buses 309 and 311 as a normal peripheral interconnect bus. When the
bus root reads from the bridge of GPU0 305a, this bridge sends the
data to pass through the bridge of GPU1 305b and return to the bus
root 302 as if the data came directly from GPU0 305a.
[0026] FIG. 5 is a block diagram of a system 400 in which one of
the multiple bus endpoints includes an internal bridge. System 400
includes a bus root 402, and an EP0 406a that includes a bridge
405a. EP0 406a is coupled to the root 402 through the bridge 405a
via a bus 409, and also to EP1 406b through the bridge 405a via a
bus 407. Each of endpoints EP0 406a and EP1 406b includes
respective local memories 408a and 408b.
[0027] FIG. 6 is a block diagram of a system 500 including more
than two bus endpoints, each including an internal bridge. System
500 includes a bus root 502, and an EP0 506a that includes a bridge
505a and a local memory 508a. System 500 further includes an EP1
506b that includes a bridge 505b and a local memory 508b, and an
EP2 506c that includes a bridge 505c and a local memory
508c.
[0028] EP0 506a is coupled to the root 502 through the bridge 505a
via a bus 509, and also to EP1 506b through the bridge 505b via a
bus 507a. EP1 506b is coupled to EP2 506c through the bridge 505c
via a bus 507b. Other embodiments include additional endpoints that
are added into the ring configuration. In other embodiments, the
system includes more than two endpoints 506, but the rightmost
endpoint does not include an internal bridge. In yet other
embodiments the flow of data is counterclockwise as opposed to the
clockwise flow shown in the figures.
[0029] Referring again to FIG. 4, there are two logical ports on
the internal bridge according to an embodiment. One port is "on" in
the bridge of GPU0 305a, and one port is "off" in the bridge of
GPU1 305b. The bus root 302 may perform write operations by sending
requests on bus 309. A standard addressing scheme indicates to the
bridge to send the request to the bus I/F. If the request is for
GPU1 305b, the bridge routes the request to bus 307. So in an
embodiment, the respective internal bridges of GPU0 305a and GPU1
305b are programmed differently.
[0030] FIG. 7 is a block diagram illustrating the division of bus
address ranges and the view of memory space from the perspective of
various components. With reference also to FIG. 4, 602 is a view of
memory from the perspective of the bus root, or Host processor 302.
604 is a view of memory from the perspective of the GPU0 305a
internal bridge. 606 is a view of memory from the perspective of
the GPU1 305b internal bridge. The bus address range is divided
into ranges for GPU0 305a, GPU1 305b, and system 302 memory spaces.
The GPU0 305a bridge is set up so that incoming requests to the
GPU0 305a range are routed to its own local memory. Incoming
requests from the root or from GPU0 305a itself to GPU1 305b or
system 302 ranges are routed to the output port of GPU0 305a. The
GPU1 305b bridge is set up slightly differently so that incoming
requests to the GPU1 305b range are routed to its own local memory.
Requests from GPU0 305a or from GPU1 305b itself to root or GPU0
305a ranges are routed to the output port of GPU1 305b.
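The two bridge set-ups described above can be sketched with a shared range map. The numeric ranges below are invented for illustration; only the routing rule comes from the text: each bridge routes requests in its own range to local memory and everything else to its output port.

```python
# Assumed example layout of the bus address space of FIG. 7; a real
# system would program these ranges at enumeration time.
RANGES = {
    "GPU0": range(0x0000_0000, 0x4000_0000),
    "GPU1": range(0x4000_0000, 0x8000_0000),
    "system": range(0x8000_0000, 0xC000_0000),
}

def bridge_route(bridge, address):
    """Route a request arriving at the named bridge ("GPU0" or "GPU1")."""
    if address in RANGES[bridge]:
        return "local memory"  # request targets this GPU's own range
    return "output port"       # forward clockwise around the loop
```

Both bridges run the same rule; they differ only in which range they treat as their own, which is why the text describes the GPU1 305b set-up as "slightly different."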
[0031] The host sees the bus topology as being like the topology of
FIG. 1.
[0032] GPU1 305b can make its own request to the host processor 302
through its own bridge and it will pass through to the host
processor 302. When the host processor 302 is returning a request,
it goes through the bridge of GPU0 305a, which has logic for
determining where requests and data are to be routed.
[0033] Write operations from GPU1 305b to GPU0 305a can be
performed in two passes. GPU1 305b sends data to a memory location
in the system memory 303. Then separately, GPU0 305a reads the data
after it learns that the data is in the system memory 303.
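The two-pass transfer can be sketched as follows. The dictionary buffer is an illustrative stand-in for system memory 303, and the application does not specify the mechanism by which GPU0 305a learns that the data is ready.

```python
system_memory = {}  # illustrative stand-in for system memory 303

def pass_one_gpu1_writes(address, data):
    """Pass 1: GPU1 305b writes the data into a system-memory buffer."""
    system_memory[address] = data

def pass_two_gpu0_reads(address):
    """Pass 2: GPU0 305a, after learning the data is in system memory,
    reads it back via the bus root 302 as a separate operation."""
    return system_memory[address]
```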
[0034] Completion messages for read data requests and other
split-transaction operations must travel along the wires in the
same direction as the requests. Therefore in addition to the
address-based request routing described above, device-based routing
must be set up in a similar manner. For example, the internal
bridge of GPU0 305a recognizes that the path for both requests and
completion messages is via bus 307.
[0035] An embodiment includes power management to improve power
usage in lightly loaded usage cases. For example in a usage case
with little graphics processing, the logic of GPU1 305b is powered
off and the bridging function in GPU1 305b is reduced to a simple
passthrough function from input port to output port. Furthermore,
the function of GPU0 305a is reduced so that transfers routed from
the input port to the output port are not processed. In an embodiment,
there is a separate power supply for the bridging function in GPU1
305b. Software detects the conditions under which to power down.
Embodiments include a separate power regulator and/or separate
internal power sources for bridges that are to be powered down
separately from the rest of the logic on the device.
[0036] Even in embodiments that do not include the power management
described above, system board area is conserved because an external
bridge (as in FIG. 1) is not required. The board area and power
required for the external bridge and its pins are conserved. On the
other hand, it is not required that each of the GPUs have its own
internal bridge. In another embodiment, GPU1 305b does not have an
internal bridge, as described with reference to FIG. 5.
[0037] The architecture of system 300 is practical in a system that
includes multiple slots for add-in circuit boards. Alternatively,
system 300 is a soldered system, such as on a mobile device.
[0038] Buses 307, 309 and 311 can be PCIe.RTM. buses or any other
similar peripheral interconnect bus.
[0039] Any circuits described herein could be implemented through
the control of manufacturing processes and maskworks which would be
then used to manufacture the relevant circuitry. Such manufacturing
process control and maskwork generation are known to those of
ordinary skill in the art and include the storage of computer
instructions on computer readable media including, for example,
Verilog, VHDL or instructions in other hardware description
languages.
[0040] Aspects of the embodiments described above may be
implemented as functionality programmed into any of a variety of
circuitry, including but not limited to programmable logic devices
(PLDs), such as field programmable gate arrays (FPGAs),
programmable array logic (PAL) devices, electrically programmable
logic and memory devices, and standard cell-based devices, as well
as application specific integrated circuits (ASICs) and fully
custom integrated circuits. Some other possibilities for
implementing aspects of the embodiments include microcontrollers
with memory (such as electronically erasable programmable read only
memory (EEPROM), Flash memory, etc.), embedded microprocessors,
firmware, software, etc. Furthermore, aspects of the embodiments
may be embodied in microprocessors having software-based circuit
emulation, discrete logic (sequential and combinatorial), custom
devices, fuzzy (neural) logic, quantum devices, and hybrids of any
of the above device types. Of course the underlying device
technologies may be provided in a variety of component types, e.g.,
metal-oxide semiconductor field-effect transistor (MOSFET)
technologies such as complementary metal-oxide semiconductor
(CMOS), bipolar technologies such as emitter-coupled logic (ECL),
polymer technologies (e.g., silicon-conjugated polymer and
metal-conjugated polymer-metal structures), mixed analog and
digital, etc.
[0041] The term "processor" as used in the specification and claims
includes a processor core or a portion of a processor. Further,
although one or more GPUs and one or more CPUs are usually referred
to separately herein, in embodiments both a GPU and a CPU are
included in a single integrated circuit package or on a single
monolithic die. Therefore a single device performs the claimed
method in such embodiments.
[0042] Unless the context clearly requires otherwise, throughout
the description and the claims, the words "comprise," "comprising,"
and the like are to be construed in an inclusive sense as opposed
to an exclusive or exhaustive sense; that is to say, in a sense of
"including, but not limited to." Words using the singular or plural
number also include the plural or singular number, respectively.
Additionally, the words "herein," "hereunder," "above," "below,"
and words of similar import, when used in this application, refer
to this application as a whole and not to any particular portions
of this application. When the word "or" is used in reference to a
list of two or more items, that word covers all of the following
interpretations of the word: any of the items in the list, all of
the items in the list, and any combination of the items in the
list.
[0043] The above description of illustrated embodiments of the
method and system is not intended to be exhaustive or to limit the
invention to the precise forms disclosed. While specific
embodiments of, and examples for, the method and system are
described herein for illustrative purposes, various equivalent
modifications are possible within the scope of the invention, as
those skilled in the relevant art will recognize. The teachings of
the disclosure provided herein can be applied to other systems, not
only for systems including graphics processing or video processing,
as described above. The various operations described may be
performed in a very wide variety of architectures and distributed
differently than described. In addition, though many configurations
are described herein, none are intended to be limiting or
exclusive.
[0044] In other embodiments, some or all of the hardware and
software capability described herein may exist in a printer, a
camera, television, a digital versatile disc (DVD) player, a DVR or
PVR, a handheld device, a mobile telephone or some other device.
The elements and acts of the various embodiments described above
can be combined to provide further embodiments. These and other
changes can be made to the method and system in light of the above
detailed description.
[0045] In general, in the following claims, the terms used should
not be construed to limit the method and system to the specific
embodiments disclosed in the specification and the claims, but
should be construed to include any processing systems and methods
that operate under the claims. Accordingly, the method and system
is not limited by the disclosure, but instead the scope of the
method and system is to be determined entirely by the claims.
[0046] While certain aspects of the method and system are presented
below in certain claim forms, the inventors contemplate the various
aspects of the method and system in any number of claim forms. For
example, while only one aspect of the method and system may be
recited as embodied in computer-readable medium, other aspects may
likewise be embodied in computer-readable medium. Such computer
readable media may store instructions that are to be executed by a
computing device (e.g., personal computer, personal digital
assistant, PVR, mobile device or the like) or may be instructions
(such as, for example, Verilog or a hardware description language)
that when executed are designed to create a device (GPU, ASIC, or
the like) or software application that when operated performs
aspects described above. The claimed invention may be embodied in
computer code (e.g., HDL, Verilog, etc.) that is created, stored,
synthesized, and used to generate GDSII data (or its equivalent).
An ASIC may then be manufactured based on this data.
[0047] Accordingly, the inventors reserve the right to add
additional claims after filing the application to pursue such
additional claim forms for other aspects of the method and
system.
* * * * *