U.S. patent application number 10/413,916, "Command to transfer data from node state agent to memory bridge," was filed with the patent office on April 15, 2003 and published on December 11, 2003.
This patent application is currently assigned to Broadcom Corporation. Invention is credited to Joseph B. Rowlands.
United States Patent Application 20030229676
Kind Code: A1
Inventor: Rowlands, Joseph B.
Publication Date: December 11, 2003
Command to transfer data from node state agent to memory bridge
Abstract
A node comprises a first agent, a second agent, and a third
agent, all coupled to an interconnect. The first agent is
configured to initiate a transaction on the interconnect to
transfer a coherency block to the second agent. The third agent is
configured to transmit the coherency block on the interconnect
during a data portion of the transaction instead of the first agent
responsive to a state of the coherency block in the third agent. In
some embodiments, the first agent may be designated to store the
node state of a remote cache block, and the second agent may be
responsible for internode coherency within the node.
Inventors: Rowlands, Joseph B. (Santa Clara, CA)
Correspondence Address: Lawrence J. Merkel, Meyertons, Hood, Kivlin, Kowert & Goetzel, P.C., P.O. Box 398, Austin, TX 78767, US
Assignee: Broadcom Corporation, Irvine, CA
Family ID: 29272888
Appl. No.: 10/413,916
Filed: April 15, 2003
Related U.S. Patent Documents

Application Number | Filing Date | Patent Number
10/413,916 | Apr 15, 2003 |
10/270,028 | Oct 11, 2002 |
60/380,740 | May 15, 2002 |
Current U.S. Class: 709/216
Current CPC Class: G06F 9/30072 (20130101); G06F 9/3004 (20130101); G06F 12/082 (20130101); G06F 12/0817 (20130101); G06F 9/30087 (20130101); G06F 2212/1048 (20130101); G06F 2212/2542 (20130101); G06F 13/4027 (20130101)
Class at Publication: 709/216
International Class: G06F 015/167
Claims
What is claimed is:
1. A node comprising: a first agent, a second agent, and a third
agent, all coupled to an interconnect; wherein the first agent is
configured to initiate a transaction on the interconnect to
transfer a coherency block to the second agent; and wherein the
third agent is configured to transmit the coherency block on the
interconnect during a data portion of the transaction instead of
the first agent responsive to a state of the coherency block in the
third agent.
2. The node as recited in claim 1 wherein the coherency block is a
remote coherency block.
3. The node as recited in claim 2 wherein the first agent is
designated to retain a state of the remote coherency block within
the node.
4. The node as recited in claim 3 wherein the first agent comprises
a cache, and wherein the cache is configured to initiate the
transaction in response to evicting the remote coherency block.
5. The node as recited in claim 4 wherein the cache is configured
to initiate a different transaction in response to evicting a local
coherency block.
6. The node as recited in claim 2 wherein the second agent is
responsible for maintaining internode coherency within the
node.
7. The node as recited in claim 1 wherein the third agent is
configured to signal the first agent during a response portion of
the transaction to indicate that the third agent is to transmit the
coherency block during the data portion.
8. The node as recited in claim 1 wherein the third agent is
configured to transmit the coherency block if the state of the
coherency block in the third agent is modified.
9. The node as recited in claim 1 wherein the first agent is
configured to transmit the coherency block on the interconnect
during the data portion of the transaction responsive to the third
agent not having the state of the coherency block that causes the
third agent to transmit the coherency block.
10. The node as recited in claim 1 wherein the third agent is
configured to snoop the transaction from the interconnect and is
configured to detect the state of the coherency block in response
to snooping.
11. A method comprising: a first agent initiating a transaction on
an interconnect to transfer a coherency block to a second agent; a
third agent transmitting the coherency block on the interconnect
during a data portion of the transaction instead of the first agent
responsive to a state of the coherency block in the third
agent.
12. The method as recited in claim 11 wherein the coherency block
is a remote coherency block.
13. The method as recited in claim 12 wherein the first agent is
designated to retain a state of the remote coherency block within
the node, and wherein the first agent comprises a cache, the method
further comprising the cache evicting the remote coherency block,
and wherein the first agent initiating the transaction is in
response to the evicting the remote coherency block.
14. The method as recited in claim 13 further comprising the cache
initiating a different transaction in response to evicting a local
coherency block.
15. The method as recited in claim 12 wherein the second agent is
responsible for maintaining internode coherency within the
node.
16. The method as recited in claim 11 wherein the state of the
coherency block in the third agent is modified.
17. The method as recited in claim 11 further comprising the first
agent transmitting the coherency block on the interconnect during
the data portion of the transaction responsive to the state in the
third agent not causing the third agent to transmit the coherency
block.
18. The method as recited in claim 11 further comprising the third
agent snooping the transaction from the interconnect, wherein the
third agent transmitting the coherency block is responsive to the
snooping.
19. A node comprising: an interconnect; and a cache coupled to the
interconnect, wherein the cache is configured to evict a cache
block stored therein, and wherein the cache is configured to
initiate a first transaction to transfer the cache block on the
interconnect responsive to the cache block being a local cache
block, and wherein the cache is configured to initiate a second
transaction to transfer the cache block on the interconnect
responsive to the cache block being a remote cache block, and
wherein the second transaction is different from the first
transaction.
20. The node as recited in claim 19 further comprising a memory
bridge coupled to the interconnect and responsible for internode
coherency within the node, wherein the memory bridge is configured
to receive the second transaction and to transfer the cache block
to another node.
21. The node as recited in claim 20 further comprising a memory
controller coupled to the interconnect and configured to receive
the first transaction, wherein the memory controller is configured
to transfer the cache block to a memory to which the memory
controller is configured to be coupled.
22. The node as recited in claim 19 further comprising an agent
coupled to the interconnect, wherein the agent is configured to
transmit the cache block in a data portion of the second
transaction instead of the cache responsive to the agent having a
modified copy of the cache block.
Description
[0001] This application claims benefit of priority to U.S.
Provisional Patent Application Serial No. 60/380,740, filed May 15,
2002. This application is a continuation-in-part of U.S. patent
application Ser. No. 10/270,028, filed on Oct. 11, 2002.
BACKGROUND OF THE INVENTION
[0002] 1. Field of the Invention
[0003] This invention is related to coherent memory systems,
including coherent distributed memory systems such as
cache-coherent nonuniform memory access (CC-NUMA) memory
systems.
[0004] 2. Description of the Related Art
[0005] Memory systems (including main memory and any caches in the
system) are often designed to be coherent. That is, even though
multiple copies of data from a given memory location may exist in
the memory system, a read of that memory location returns the most
recent data written to that memory location. The most recent data
written to that memory location may be determined via an order of
accesses to the memory location established according to the
coherency mechanism. Typically, a coherent system may include one
or more coherent agents and a memory controller coupled via an
interconnect of some kind.
[0006] One mechanism for scaling coherent systems to larger numbers
of coherent agents is a distributed memory system. In such a
system, memory is distributed among various nodes (which may also
include coherent agents), and the nodes are interconnected. A
coherent agent in one node may access memory in another node. One
class of techniques for maintaining coherency in a distributed
memory system is referred to as cache-coherent, nonuniform memory
access (CC-NUMA). In a CC-NUMA system, access to memory may have a
varying latency (e.g. memory in the same node as an agent may be
accessed more rapidly than memory in another node, and accesses to
different nodes may have varying latencies as well), but coherency
is maintained. Data from another node may be cached in a given
node.
[0007] In some cases, it may be desirable to designate an agent in
a node to maintain the state of data transferred to the node from
another node. If that agent is to discard the data, a mechanism is
desired to ensure that the data is deleted from other agents in the
node. Additionally, the mechanism may be required to cause the most
recently updated copy of the data, wherever it may reside within
the node, to be returned to the node from which the data was
read.
SUMMARY OF THE INVENTION
[0008] In one embodiment, a node comprises a first agent, a second
agent, and a third agent, all coupled to an interconnect. The first
agent is configured to initiate a transaction on the interconnect
to transfer a coherency block to the second agent. The third agent
is configured to transmit the coherency block on the interconnect
during a data portion of the transaction instead of the first agent
responsive to a state of the coherency block in the third agent. In
some embodiments, the first agent may be designated to store the
node state of a remote cache block, and the second agent may be
responsible for internode coherency within the node.
[0009] In another embodiment, a node includes an interconnect and a
cache coupled to the interconnect. The cache is configured to evict
a cache block stored therein. The cache is configured to initiate a
first transaction to transfer the cache block on the interconnect
responsive to the cache block being a local cache block, and is
configured to initiate a second transaction to transfer the cache
block on the interconnect responsive to the cache block being a
remote cache block. The second transaction is different from the
first transaction.
BRIEF DESCRIPTION OF THE DRAWINGS
[0010] The following detailed description makes reference to the
accompanying drawings, which are now briefly described.
[0011] FIG. 1 is a block diagram of one embodiment of a node.
[0012] FIG. 2 is a block diagram of one embodiment of a cache, a
processor, and a memory bridge shown in FIG. 1 during one example
of a write flush command.
[0013] FIG. 3 is a block diagram of one embodiment of a cache, a
processor, and a memory bridge shown in FIG. 1 during another
example of a write flush command.
[0014] FIG. 4 is a flowchart illustrating operation of one
embodiment of the cache shown in FIGS. 1-3.
[0015] FIG. 5 is a table illustrating an exemplary set of coherency
commands and a table illustrating an exemplary set of transactions
according to one embodiment of the node shown in FIG. 1.
[0016] FIG. 6 is a block diagram of an address space supported by
one embodiment of the node shown in FIG. 1.
[0017] FIG. 7 is a decision tree illustrating operation of one
embodiment of a node for a read transaction on the interconnect
within the node.
[0018] FIG. 8 is a decision tree illustrating operation of one
embodiment of a node for a write transaction on the interconnect
within the node.
[0019] FIG. 9 is a diagram illustrating operation of one embodiment
of the memory bridge for remote coherency commands received by the
memory bridge.
[0020] FIG. 10 is a block diagram of one embodiment of a computer
accessible medium.
[0021] While the invention is susceptible to various modifications
and alternative forms, specific embodiments thereof are shown by
way of example in the drawings and will herein be described in
detail. It should be understood, however, that the drawings and
detailed description thereto are not intended to limit the
invention to the particular form disclosed, but on the contrary,
the intention is to cover all modifications, equivalents and
alternatives falling within the spirit and scope of the present
invention as defined by the appended claims.
DETAILED DESCRIPTION OF EMBODIMENTS
[0022] Node Overview
[0023] Turning now to FIG. 1, a block diagram of one embodiment of
a node 10 is shown. In the embodiment of FIG. 1, the node 10
includes one or more processors 12A-12N, a memory controller 14, a
switch 18, a set of interface circuits 20A-20C, a memory bridge 32,
and an L2 cache 36. The memory bridge 32 includes a remote line
directory 34. The node 10 includes an interconnect 22 to which the
processors 12A-12N, the memory controller 14, the L2 cache 36, the
memory bridge 32, and the remote line directory 34 are coupled. The
node 10 is coupled, through the memory controller 14, to a memory
24. The interface circuits 20A-20C each include a receive (Rx)
circuit 26A-26C and a transmit (Tx) circuit 28A-28C. The node 10 is
coupled to a set of interfaces 30A-30C through respective interface
circuits 20A-20C. The interface circuits 20A-20C are coupled to the
switch 18, which is further coupled to the memory bridge 32. A
configuration register 38 is also illustrated in FIG. 1, which
stores a node number (Node #) for the node 10. The configuration
register 38 is coupled to the L2 cache 36, the memory controller
14, the memory bridge 32, and the interface circuits 20A-20C in the
embodiment of FIG. 1. Additionally, the processors 12A-12N may be
coupled to receive the node number from the configuration register
38.
[0024] The node 10 may support intranode coherency for transactions
on the interconnect 22. Additionally, the node 10 may support
internode coherency with other nodes (e.g. a CC-NUMA coherency, in
one embodiment). Generally, as used herein, a memory bridge
includes circuitry designed to handle internode coherency functions
within a node. Particularly, in one embodiment, if a transaction on
the interconnect 22 (e.g. a transaction issued by the processors
12A-12N) accesses a cache block that is remote to the node 10 (i.e.
the cache block is part of the memory coupled to a different node)
and the node 10 does not have sufficient ownership to perform the
transaction, the memory bridge 32 may issue one or more coherency
commands to the other nodes to obtain the ownership (and a copy of
the cache block, in some cases). Similarly, if the transaction
accesses a local cache block but one or more other nodes have a copy
of the cache block, the memory bridge 32 may issue coherency
commands to other nodes. Still further, the memory bridge 32 may
receive coherency commands from other nodes, and may perform
transactions on the interconnect 22 to effect the coherency
commands.
[0025] In one embodiment, a node such as node 10 may have memory
coupled thereto (e.g. memory 24). The node may be responsible for
tracking the state, in other nodes, of each cache block from the
memory in that node. A node is referred to as the "home node" for
cache blocks from the memory assigned to that node. A node is
referred to as a "remote node" for a cache block if the node is not
the home node for that cache block. Similarly, a cache block is
referred to as a local cache block in the home node for that cache
block and as a remote cache block in other nodes.
[0026] Generally, a remote node may begin the coherency process by
requesting a copy of a cache block from the home node of that cache
block using a coherency command. The memory bridge 32 in the remote
node, for example, may detect a transaction on the interconnect 22
that accesses the cache block and may detect that the remote node
does not have sufficient ownership of the cache block to complete
the transaction (e.g. it may not have a copy of the cache block at
all, or may have a shared copy and may require exclusive ownership
to complete the transaction). The memory bridge 32 in the remote
node may generate and transmit the coherency command to the home
node to obtain the copy or to obtain sufficient ownership. The
memory bridge 32 in the home node may determine if any state
changes in other nodes are to be performed to grant the requested
ownership to the remote node, and may transmit coherency commands
(e.g. probe commands) to effect the state changes. The memory
bridge 32 in each node receiving the probe commands may effect the
state changes and respond to the probe commands. Once the responses
have been received, the memory bridge 32 in the home node may
respond to the remote node (e.g. with a fill command including the
cache block).
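For illustration only, the home-node flow described above can be sketched in software; the class and helper names below are hypothetical, and the collection of probe responses is elided, so this is a reading of the paragraph rather than the disclosed hardware:

    # Hypothetical model of the home-node flow: the remote line
    # directory maps a block address to per-node internode state.
    class HomeNode:
        def __init__(self):
            self.rld = {}  # block address -> {node: "shared"/"modified"}

        def handle_read(self, block, requester, exclusive):
            """Service a cRdShd/cRdExc-style coherency command."""
            sharers = self.rld.setdefault(block, {})
            for node, state in list(sharers.items()):
                if node == requester:
                    continue
                if state == "modified":
                    # Ask the modified owner to return and invalidate.
                    self.send_probe("flush", node, block)
                    del sharers[node]
                elif exclusive:
                    # Exclusive ownership requested: invalidate sharers.
                    self.send_probe("kill", node, block)
                    del sharers[node]
            # After the probe responses arrive (not modeled here),
            # grant the requested ownership with a fill.
            sharers[requester] = "modified" if exclusive else "shared"
            self.send_fill(requester, block)

        def send_probe(self, kind, node, block): ...
        def send_fill(self, node, block): ...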
[0027] The remote line directory 34 may be used in the home node to
track the state of the local cache blocks in the remote nodes. The
remote line directory 34 is updated each time a cache block is
transmitted to a remote node, the remote node returns the cache
block to the home node, or the cache block is invalidated via
probes. As used herein, the "state" of a cache block in a given
node refers to an indication of the ownership that the given node
has for the cache block according to the coherency protocol
implemented by the nodes. Certain levels of ownership may permit no
access, read-only access, or read-write access to the cache block.
For example, in one embodiment, the modified, shared, and invalid
states are supported in the internode coherency protocol. In the
modified state, the node may read and write the cache block and the
node is responsible for returning the block to the home node if
evicted from the node. In the shared state, the node may read the
cache block but not write the cache block without transmitting a
coherency command to the home node to obtain modified state for the
cache block. In the invalid state, the node may not read or write
the cache block (i.e. the node does not have a valid copy of the
cache block). Other embodiments may use other coherency protocols
(e.g. the MESI protocol, which includes the modified, shared, and
invalid states and an exclusive state in which the cache block has
not yet been updated but the node is permitted to read and write
the cache block, or the MOESI protocol which includes the modified,
exclusive, shared, and invalid states and an owned state which
indicates that there may be shared copies of the block but the copy
in main memory is stale). In one embodiment, agents within the node
may implement the MESI protocol for intranode coherency. Thus, the
node may be viewed as having a state in the internode coherency and
individual agents may have a state in the intranode coherency
(consistent with the internode coherency state for the node
containing the agent).
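The ownership levels and the intranode/internode consistency rule described in this paragraph can be captured in a small table; the following sketch is an illustrative encoding of the text, not a hardware specification:

    # Internode (node) states and the access each permits.
    NODE_STATES = {
        "invalid":  {"read": False, "write": False},
        "shared":   {"read": True,  "write": False},
        "modified": {"read": True,  "write": True},
    }

    # Intranode MESI states an agent may hold without exceeding the
    # ownership the node itself has been granted.
    CONSISTENT_AGENT_STATES = {
        "invalid":  {"I"},
        "shared":   {"I", "S"},
        "modified": {"I", "S", "E", "M"},
    }

    def agent_state_consistent(node_state, agent_state):
        return agent_state in CONSISTENT_AGENT_STATES[node_state]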
[0028] Coherency commands are transmitted and received on one of
the interfaces 30A-30C by the corresponding interface circuit
20A-20C. The interface circuits 20A-20C receive coherency commands
for transmission from the memory bridge 32 and transmit coherency
commands received from the interfaces 30A-30C to the memory bridge
32 for processing, if the coherency commands require processing in
the node 10. In some embodiments, a coherency command may be
received that is passing through the node 10 to another node, and
does not require processing in the node 10. The interface circuits
20A-20C may be configured to detect such commands and retransmit
them (through another interface circuit 20A-20C) without involving
the memory bridge 32.
[0029] In the illustrated embodiment, the interface circuits
20A-20C are coupled to the memory bridge 32 through the switch 18
(although in other embodiments, the interface circuits 20A-20C may
have direct paths to the memory bridge 32). The switch 18 may
selectively couple the interface circuits 20A-20C (and particularly
the Rx circuits 26A-26C in the illustrated embodiment) to other
interface circuits 20A-20C (and particularly the Tx circuits
28A-28C in the illustrated embodiment) or to the memory bridge 32
to transfer received coherency commands. The switch 18 may also
selectively couple the memory bridge 32 to the interface circuits
20A-20C (and particularly to the Tx circuits 28A-28C in the
illustrated embodiment) to transfer coherency commands generated by
the memory bridge 32 from the memory bridge 32 to the interface
circuits 20A-20C for transmission on the corresponding interface
30A-30C. The switch 18 may have request/grant interfaces to each of
the interface circuits 20A-20C and the memory bridge 32 for
requesting transfers and granting those transfers. The switch 18
may have an input path from each source (the Rx circuits 26A-26C
and the memory bridge 32) and an output path to each destination
(the Tx circuits 28A-28C and the memory bridge 32), and may couple
a granted input path to a granted output path for transmission of a
coherency command (or a portion thereof, if coherency commands are
larger than one transfer through the switch 18). The couplings may
then be changed to the next granted input path and granted output
path. Multiple independent input path/output path grants may occur
concurrently.
[0030] In one embodiment, the interfaces 30A-30C may support a set
of virtual channels in which commands are transmitted. Each virtual
channel is defined to flow independent of the other virtual
channels, even though the virtual channels may share certain
physical resources (e.g. the interface 30A-30C on which the
commands are flowing). These virtual channels may be mapped to
internal virtual channels (referred to as switch virtual channels
herein). The switch 18 may be virtual-channel aware. That is, the
switch 18 may grant a coupling between a source and a destination
based not only on the ability of the source to transfer data and
the destination to receive data, but also on the ability of the
source to transfer data in a particular switch virtual channel and
the destination to receive data on that switch virtual channel.
Thus, requests from sources may indicate the destination and the
virtual channel on which data is to be transferred, and requests
from destinations may indicate the virtual channel on which data
may be received.
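As an illustrative sketch of the virtual-channel-aware grant decision, the request/grant signaling here is simplified to tuples and dictionaries, which are not the disclosed interfaces:

    def grant(requests, src_ready, dst_ready):
        """requests: iterable of (src, dst, vc) tuples.
        src_ready[(src, vc)]: source can transfer on that switch VC.
        dst_ready[(dst, vc)]: destination can receive on that VC.
        Returns a set of non-conflicting grants; multiple independent
        input path/output path grants may occur concurrently."""
        granted, used_src, used_dst = set(), set(), set()
        for src, dst, vc in requests:
            if src in used_src or dst in used_dst:
                continue  # one transfer per input path and output path
            if src_ready.get((src, vc)) and dst_ready.get((dst, vc)):
                granted.add((src, dst, vc))
                used_src.add(src)
                used_dst.add(dst)
        return granted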
[0031] Generally speaking, a node may include one or more coherent
agents (dotted enclosure 16 in FIG. 1). In the embodiment of FIG.
1, the processors 12A-12N, the L2 cache 36, and the memory
controller 14 may be examples of coherent agents 16. Additionally,
the memory bridge 32 may be a coherent agent (on behalf of other
nodes). However, other embodiments may include other coherent
agents as well, such as a bridge to one or more I/O interface
circuits, or the I/O interface circuits themselves. Generally, an
agent includes any circuit which participates in transactions on an
interconnect. A coherent agent is an agent that is capable of
performing coherent transactions and operating in a coherent
fashion with regard to transactions. A transaction is a
communication on an interconnect. The transaction is sourced by one
agent on the interconnect, and may have one or more agents as a
target of the transaction. Read transactions specify a transfer of
data from a target to the source, while write transactions specify
a transfer of data from the source to the target. Other
transactions may be used to communicate between agents without
transfer of data, in some embodiments.
[0032] Each of the interface circuits 20A-20C is configured to
receive and transmit on the respective interfaces 30A-30C to which
they are connected. The Rx circuits 26A-26C handle the receiving of
communications from the interfaces 30A-30C, and the Tx circuits
28A-28C handle the transmitting of communications on the interfaces
30A-30C.
[0033] Each of the interfaces 30A-30C used for coherent
communications is defined to be capable of transmitting and
receiving coherency commands. Particularly, in the embodiment of
FIG. 1, those interfaces 30A-30C may be defined to receive/transmit
coherency commands to and from the node 10 from other nodes.
Additionally, other types of commands may be carried. In one
embodiment, each interface 30A-30C may be a HyperTransport™ (HT)
interface, including an extension to the HT interface to include
coherency commands (HTcc). Additionally, in some embodiments, an
extension to the HyperTransport interface to carry packet data
(Packet over HyperTransport, or PoHT) may be supported. As used
herein, coherency commands include any communications between nodes
that are used to maintain coherency between nodes. The commands may
include read or write requests initiated by a node to fetch or
update a cache block belonging to another node, probes to
invalidate cached copies of cache blocks in remote nodes (and
possibly to return a modified copy of the cache block to the home
node), responses to probe commands, fills which transfer data,
etc.
[0034] In some embodiments, one or more of the interface circuits
20A-20C may not be used for coherency management and may be defined
as packet interfaces. Such interfaces 30A-30C may be HT interfaces.
Alternatively, such interfaces 30A-30C may be system packet
interfaces (SPI) according to any level of the SPI specification
set forth by the Optical Internetworking Forum (e.g. level 3, level
4, or level 5). In one particular embodiment, the interfaces may be
SPI-4 phase 2 interfaces. In the illustrated embodiment, each
interface circuit 20A-20C may be configurable to communicate on
either the SPI-4 interface or the HT interface. Each interface
circuit 20A-20C may be individually programmable, permitting
various combinations of the HT and SPI-4 interfaces as interfaces
30A-30C. The programming may be performed in any fashion (e.g.
sampling certain signals during reset, shifting values into
configuration registers (not shown) during reset, programming the
interfaces with configuration space commands after reset, pins that
are tied up or down externally to indicate the desired programming,
etc.). Other embodiments may employ any interface capable of
carrying packet data (e.g. the Media Independent Interface (MII) or
the Gigabit MII (GMII) interfaces, X.25, Frame Relay, Asynchronous
Transfer Mode (ATM), etc.). The packet interfaces may carry packet
data directly (e.g. transmitting the packet data with various
control information indicating the start of packet, end of packet,
etc.) or indirectly (e.g. transmitting the packet data as a payload
of a command, such as PoHT).
[0035] In embodiments which also support packet traffic, the node
10 may also include a packet direct memory access (DMA) circuit
configured to transfer packets to and from the memory 24 on behalf
of the interface circuits 20A-20C. The switch 18 may be used to
transmit packet data from the interface circuits 20A-20C to the
packet DMA circuit and from the packet DMA circuit to the interface
circuits 20A-20C. Additionally, packets may be routed from an Rx
circuit 26A-26C to a Tx circuit 28A-28C through the switch 18, in
some embodiments.
[0036] The processors 12A-12N may be designed to any instruction
set architecture, and may execute programs written to that
instruction set architecture. Exemplary instruction set
architectures may include the MIPS instruction set architecture
(including the MIPS-3D and MIPS MDMX application specific
extensions), the IA-32 or IA-64 instruction set architectures
developed by Intel Corp., the PowerPC instruction set architecture,
the Alpha instruction set architecture, the ARM instruction set
architecture, or any other instruction set architecture. The node
10 may include any number of processors (e.g. as few as one
processor, two processors, four processors, etc.).
[0037] The L2 cache 36 may be any type and capacity of cache
memory, employing any organization (e.g. set associative, direct
mapped, fully associative, etc.). In one embodiment, the L2 cache
36 may be an 8-way set associative, 1 MB cache. The L2 cache 36 is
referred to as L2 herein because the processors 12A-12N may include
internal (L1) caches. In other embodiments the L2 cache 36 may be
an L1 cache, an L3 cache, or any other level as desired.
[0038] The memory controller 14 is configured to access the memory
24 in response to read and write transactions received on the
interconnect 22. The memory controller 14 may receive a hit signal
from the L2 cache, and if a hit is detected in the L2 cache for a
given read/write transaction, the memory controller 14 may not
respond to that transaction. The memory controller 14 may be
designed to access any of a variety of types of memory. For
example, the memory controller 14 may be designed for synchronous
dynamic random access memory (SDRAM), and more particularly double
data rate (DDR) SDRAM. Alternatively, the memory controller 14 may
be designed for DRAM, DDR synchronous graphics RAM (SGRAM), DDR
fast cycle RAM (FCRAM), DDR-II SDRAM, Rambus DRAM (RDRAM), SRAM, or
any other suitable memory device or combinations of the above
mentioned memory devices.
[0039] The interconnect 22 may be any form of communication medium
between the devices coupled to the interconnect. For example, in
various embodiments, the interconnect 22 may include shared buses,
crossbar connections, point-to-point connections in a ring, star,
or any other topology, meshes, cubes, etc. The interconnect 22 may
also include storage, in some embodiments. In one particular
embodiment, the interconnect 22 may comprise a bus. The bus may be
a split transaction bus, in one embodiment (i.e. having separate
address and data phases). The data phases of various transactions
on the bus may proceed out of order with the address phases. The
bus may also support coherency and thus may include a response
phase to transmit coherency response information. The bus may
employ a distributed arbitration scheme, in one embodiment. In one
embodiment, the bus may be pipelined. The bus may employ any
suitable signaling technique. For example, in one embodiment,
differential signaling may be used for high speed signal
transmission. Other embodiments may employ any other signaling
technique (e.g. TTL, CMOS, GTL, HSTL, etc.). Other embodiments may
employ non-split transaction buses arbitrated with a single
arbitration for address and data and/or a split transaction bus in
which the data bus is not explicitly arbitrated. Either a central
arbitration scheme or a distributed arbitration scheme may be used,
according to design choice. Furthermore, the bus may not be
pipelined, if desired.
[0040] Various embodiments of the node 10 may include additional
circuitry, not shown in FIG. 1. For example, the node 10 may
include various I/O devices and/or interfaces. Exemplary I/O may
include one or more PCI interfaces, one or more serial interfaces,
Personal Computer Memory Card International Association (PCMCIA)
interfaces, etc. Such interfaces may be directly coupled to the
interconnect 22 or may be coupled through one or more I/O bridge
circuits.
[0041] In one embodiment, the node 10 (and more particularly the
processors 12A-12N, the memory controller 14, the L2 cache 36, the
interface circuits 20A-20C, the memory bridge 32 including the
remote line directory 34, the switch 18, the configuration register
38, and the interconnect 22) may be integrated onto a single
integrated circuit as a system on a chip configuration. The
additional circuitry mentioned above may also be integrated.
Alternatively, other embodiments may implement one or more of the
devices as separate integrated circuits. In another configuration,
the memory 24 may be integrated as well. Alternatively, one or more
of the components may be implemented as separate integrated
circuits, or all components may be separate integrated circuits, as
desired. Any level of integration may be used.
[0042] It is noted that, while three interface circuits 20A-20C are
illustrated in FIG. 1, one or more interface circuits may be
implemented in various embodiments. As used herein, an interface
circuit includes any circuitry configured to communicate on an
interface according to the protocol defined for the interface. The
interface circuit may include receive circuitry configured to
receive communications on the interface and transmit the received
communications to other circuitry internal to the system that
includes the interface circuit. The interface circuit may also
include transmit circuitry configured to receive communications
from the other circuitry internal to the system and configured to
transmit the communications on the interface.
[0043] It is noted that the discussion herein may describe cache
blocks and maintaining coherency on a cache block granularity (that
is, each cache block has a coherency state that applies to the
entire cache block as a unit). Other embodiments may maintain
coherency on a different granularity than a cache block, which may
be referred to as a coherency block. A coherency block may be
smaller than a cache line, a cache line, or larger than a cache
line, as desired. The discussion herein of cache blocks and
maintaining coherency therefor applies equally to coherency blocks
of any size.
[0044] Write Flush Transaction
[0045] The node 10 may cache one or more remote cache blocks, and
may have a state in the node 10 for the cache block (the "node
state"). The node state may be the state in which the remote cache
block is provided to the node 10 (e.g. shared or modified) that is
tracked by the home node of the remote cache block. If the node
state in the node 10 is modified, the node 10 returns the cache
block to the home node when the node 10 evicts the cache block. In
such cases, the home node may update the state for the remote cache
block to indicate that the node 10 no longer has a copy.
Accordingly, when evicting a remote cache block with a modified
node state from the node 10, the node 10 ensures that any copies of
the remote cache block in the node 10 are invalidated.
[0046] The node 10 may include a transaction for transferring an
evicted remote cache block to the memory bridge 32 for return to
its home node. The transaction may be defined to transfer the cache
block addressed by the transaction to the agent responsible for
internode coherency (e.g. the memory bridge 32, in the embodiment
of FIG. 1). The transaction (referred to as a write flush
transaction, or WrFlush transaction, herein for convenience) may be
used for evicting remote cache blocks, while other transactions
(transactions with different command encodings on the interconnect
22) may be used to coherently or non-coherently transfer data among
the agents (including the memory bridge 32, for remote
transactions). The WrFlush transaction may be received by other
coherent agents within the node 10, and the coherent agents may
invalidate the remote cache block if stored therein in response to
the WrFlush transaction. Additionally, if a coherent agent that did
not initiate the WrFlush transaction has a modified copy of the
remote cache block, that coherent agent may transmit the cache
block during the data portion of the WrFlush transaction instead of
the initiating agent. The coherent agent may signal the initiating
agent (e.g. during a response portion of the transaction) that the
coherent agent is to transfer the data.
[0047] There may be various coherent agents in the node 10 (e.g.
the L2 cache 36, the processors 12A-12N, etc.), one or more of
which may have a copy of a given remote cache block. Multiple
copies may exist when various agents share the remote cache block
according to the intranode coherency scheme employed on the
interconnect 22. For example, in one embodiment, a MESI protocol
may be employed, and may be enforced by the coherent agents
snooping transactions on the interconnect 22. Other embodiments may
employ a MOESI coherency protocol or any other protocol. Thus, when
a remote cache block is being evicted from the node 10, the
ownership of the remote cache block may not be determined by the
initiating agent prior to initiating the WrFlush transaction. The
WrFlush transaction permits an exclusive owner (according to the
intranode coherency protocol) to transfer the cache block, or the
initiating agent transfers the cache block if no other agent is the
exclusive owner.
[0048] In one embodiment, the node 10 may include an agent
designated for retaining the node state (that is, the state that
the home node is tracking for the node 10) of a remote cache block.
For example, the L2 cache 36 may be the designated agent (although
any agent may be designated in other embodiments). When the node 10
fetches a remote cache block into the node 10, the L2 cache 36 may
allocate a storage location to store the remote cache block. The
state of the cache block is stored in the L2 cache 36 as well.
[0049] The L2 cache 36 may permit other coherent agents to access
the remote cache block and to cache the remote cache block in any
state that is consistent with the node state (e.g. any state that
is less permissive than the node state or has the same
permissiveness as the node state, where read access is less
permissive than write access or read/write access). For example, if
the node state is modified, any state may be maintained by a
caching node. If the node state is shared, only the shared or
invalid states are consistent with the node state. Since other
coherent agents may have a copy of a remote cache block, if the L2
cache 36 evicts the remote cache block, the other coherent agents
may be forced to evict the remote cache block as well. In some
embodiments, the L2 cache 36 may force the eviction if the node
state is modified, but not if the node state is shared. However,
since the L2 cache 36 allows other coherent agents to cache the
remote cache block in the modified state (and thus permits these
coherent agents to modify the remote cache block), another coherent
agent may have a copy of the remote cache block which is more up to
date than the L2 cache's copy.
[0050] In such an embodiment, the L2 cache 36 may use the WrFlush
transaction to evict remote cache blocks having a modified node
state. The L2 cache 36 may initiate the WrFlush transaction on the
interconnect 22. The coherent agents may snoop the WrFlush
transaction, and may invalidate any copies of the remote cache
block in the coherent agents in response. Each coherent agent may
signal a snoop response to the L2 cache 36 during a response
portion of the WrFlush transaction. If a coherent agent has a
modified copy of the cache block, that coherent agent may signal
exclusive in the response portion. That coherent agent may then
become responsible for transmitting the remote cache block in the
data portion of the WrFlush transaction. If no coherent agent
signals exclusive, the L2 cache 36 transmits the remote cache block
in the data portion of the WrFlush transaction. In either case, the
memory bridge 32 receives the WrFlush transaction and the remote
cache block for transfer to the home node of the remote cache
block.
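The WrFlush resolution described in this paragraph may be summarized in pseudocode form; the agent methods below are hypothetical stand-ins for the snoop and data-phase behavior:

    def wrflush(initiator, snooping_agents, block):
        """Model of a WrFlush transaction: every snooping agent
        invalidates its copy; an agent responding exclusive supplies
        the data instead of the initiator."""
        data_source = initiator
        for agent in snooping_agents:
            if agent.snoop_state(block) in ("E", "M"):
                # Exclusive response during the response portion.
                data_source = agent
            agent.invalidate(block)
        # Driven on the interconnect during the data portion and
        # captured by the memory bridge.
        return data_source.read_block(block)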
[0051] In one embodiment, the L2 cache 36 may use the WrFlush
transaction to evict remote cache blocks to the memory bridge 32
and may use a different transaction (e.g. a write (Wr) transaction)
to evict local cache blocks. The memory controller 14 may receive
the Wr transactions and may update the memory 24 with the local
cache blocks received from the L2 cache 36. In some embodiments,
the coherent agents do not snoop the Wr transaction. In some
embodiments, the memory controller 14 may not process the WrFlush
transaction. For example, the memory controller 14 may detect the
WrFlush transaction and ignore the transaction, or may detect that
the transaction addresses a remote cache block and ignore the
transaction.
[0052] In some embodiments, a coherent agent may not necessarily
determine if it has a modified copy of the cache block to signal
exclusive in the response portion. For example, the processors
12A-12N may maintain a set of snoop tags for the data caches, used
for snooping purposes. The snoop tags may track the state of the
tags in the data cache, except for the modified state. That is, the
snoop tags may have invalid, shared, or exclusive states (where
exclusive or modified in the data cache is represented by the
exclusive state). In such embodiments, a processor 12A-12N may
transfer the cache block for the WrFlush transaction if the
processor has the cache block in either the exclusive or the
modified state.
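A short sketch of this snoop-tag convention (state names are illustrative): the snoop tags fold the modified state into exclusive, so either data-cache state produces an exclusive response:

    # Data-cache MESI state -> snoop-tag state (M is folded into E).
    SNOOP_TAG_STATE = {"I": "I", "S": "S", "E": "E", "M": "E"}

    def responds_exclusive(data_cache_state):
        # The processor transfers the WrFlush data whenever its snoop
        # tag reads exclusive, i.e. for either E or M in the data cache.
        return SNOOP_TAG_STATE[data_cache_state] == "E"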
[0053] In one embodiment, the WrFlush transaction may be decoded by
the snooping agents (e.g. the processors 12A-12N) as an exclusive
read transaction. Thus, the WrFlush transaction may have the same
effect on the snooping agent as an exclusive read transaction would
have.
[0054] It is noted that, while the WrFlush transaction is described
above for transmitting a remote cache block to the memory bridge
32, the WrFlush transaction may have other uses as well. Generally,
the WrFlush transaction may be initiated by an initiating agent to
transfer a cache block to a second agent, where the cache block may
be transmitted by either the initiating agent or a third agent
dependent upon the state of the cache block in the third agent.
[0055] FIGS. 2 and 3 are block diagrams illustrating the L2 cache
36, the processor 12A, and the memory bridge 32 for example
operation of the WrFlush transaction according to one embodiment of
the node 10. FIG. 2 is an example of the WrFlush transaction in
which the processor 12A has the remote cache block in the shared
state, and FIG. 3 is an example of the WrFlush transaction in which
the processor 12A has the remote cache block in the modified state.
The remote cache block is addressed by the address "A1" in the
examples.
[0056] In the example of FIG. 2, the L2 cache 36 has the remote
cache block addressed by the address A1 in the modified state. In
other words, the node state of the remote cache block is modified.
Additionally, the processor 12A has a copy of the remote cache
block in the shared state. The L2 cache 36 evicts the cache block
and, detecting that the cache block is a remote cache block,
initiates a WrFlush transaction to transfer the remote cache block
to the memory bridge 32 (arrow 40). Additionally, the processor 12A
snoops the WrFlush transaction, and detects that the remote cache
block is cached in the shared state. The processor 12A invalidates
the remote cache block in its cache. During a response portion of
the WrFlush transaction, the processor 12A transmits a shared
response to the L2 cache 36 (arrow 42). Since there is not an
exclusive response, the L2 cache 36 transmits the remote cache
block during the data portion of the transaction (arrow 44). The
memory bridge 32 receives the remote cache block, and subsequently
transmits the remote cache block to the home node (not shown in
FIG. 2).
[0057] In the example of FIG. 3, the L2 cache 36 has the remote
cache block addressed by the address A1 in the modified state. In
other words, the node state of the remote cache block is modified.
Additionally, the processor 12A has a copy of the remote cache
block in the modified state. The processor 12A's copy is more up to
date than the L2 cache's copy, and thus should be transmitted to
the memory bridge 32 for return to the home node. The L2 cache 36
evicts the cache block and, detecting that the cache block is a
remote cache block, initiates a WrFlush transaction to transfer the
remote cache block to the memory bridge 32 (arrow 50).
Additionally, the processor 12A snoops the WrFlush transaction, and
detects that the remote cache block is cached in the modified
state. The processor 12A invalidates the remote cache block in its
cache. During a response portion of the WrFlush transaction, the
processor 12A transmits an exclusive response to the L2 cache 36
(arrow 52). Since there is an exclusive response, the L2 cache 36
inhibits transmission of the remote cache block during the data
portion of the transaction. Instead, the processor 12A transmits
the remote cache block during the data portion (arrow 54). The
memory bridge 32 receives the remote cache block, and subsequently
transmits the remote cache block to the home node (not shown in
FIG. 3).
[0058] It is noted that, in various embodiments, there may be
additional coherent agents coupled to the interconnect 22 (e.g.
additional processors 12A-12N, I/O bridges, I/O interfaces, etc.).
If any coherent agent responds exclusive, that coherent agent may
transmit the remote cache block in the data portion of the WrFlush
transaction (similar to FIG. 3). If no coherent agent responds
exclusive, the data portion of the WrFlush transaction may proceed
as in FIG. 2. As mentioned above, in some embodiments, the
processors 12A-12N (and/or other coherent snooping agents) may not
determine whether or not a remote cache block is exclusive or
modified to respond in the response phase. In such embodiments, the
snooping agent may respond exclusive if the remote cache block is
in the exclusive state or the modified state in the snooping agent,
and may signal with the data transfer whether or not the remote
cache block is modified.
[0059] It is noted that, in some embodiments, a given coherent
agent may invalidate a cache block in its cache at a delayed time
with regard to the snoop that causes the block to be invalidated.
For example, the snoop may be queued for later invalidation, and
the queue may be checked in parallel with a cache access to ensure
the invalidated cache block is not used.
[0060] Generally, a "snoop" may include receiving a transaction
initiated on the interconnect 22, and checking for the state of the
cache block addressed by the transaction in response to the
transaction. The result of the snoop may be signaled (e.g. during a
response portion of the transaction), and a state change may be
effected in the snooping agent for the cache block addressed by the
transaction (if appropriate based on the transaction and the
current state, according to the coherency protocol). A copy of a
cache block may be modified if the copy has been changed by the
caching agent from the copy that was supplied to that caching
agent.
[0061] It is noted that, in some embodiments, the L2 cache 36 may
be programmable to reserve one or more locations for storing remote
cache blocks. Such embodiments may inhibit selecting the reserved
locations for replacement to store local cache blocks that miss in
the L2 cache 36. For example, set associative embodiments may be
programmable to reserve one or more ways for remote cache
blocks.
[0062] It is noted that portions of the WrFlush transaction are
referred to above (e.g. the response portion and the data portion).
Depending on the nature of the interconnect 22, the portions may
occur in various fashions. For example, a packet-based interconnect
may include a request packet to initiate the transaction (including
the address A1), one or more response packets indicating the
response (which may be included in the response portion) and a data
packet (which may be included in the data portion). If the
interconnect is a bus, the response portion and data portion may be
phases on the bus (e.g. a split transaction bus may include an
address phase on the address bus, a response phase on the response
lines, and a data phase on the data bus).
[0063] Turning next to FIG. 4, a flowchart is shown illustrating
operation of one embodiment of the L2 cache 36 in response to
determining that a cache block is to be evicted (e.g. in response
to an L2 cache miss for a different cache block). The blocks in
FIG. 4 are illustrated in a particular order for ease of
understanding, but any order may be used. Furthermore, blocks may
be performed in parallel by combinatorial logic circuitry in the L2
cache 36. Still further, blocks may be pipelined over multiple
clock cycles and/or the flowchart of FIG. 4 may represent operation
over multiple clock cycles.
[0064] The L2 cache 36 may determine if the evicted cache block is
in the modified state (decision block 60). If the evicted cache
block is not modified, in this embodiment, the L2 cache 36 need not
perform a transaction on the interconnect 22 irrespective of
whether the evicted cache block is local or remote. Thus, if the
evicted cache block is not in the modified state (decision block
60--"no" leg), the L2 cache 36 may drop the evicted cache block
(block 62).
[0065] If the evicted cache block is modified (decision block
60--"yes" leg), the L2 cache block may determine if the cache block
is a remote cache block (decision block 64). In one embodiment, a
remote cache block may be detectable by its address. For example,
one embodiment described below with regard to FIGS. 5-9 maps
addresses to nodes based on the most significant bits of address
and the node number in the node. Other embodiments may detect
remote cache blocks in other fashions (e.g. the L2 cache 36 may
store an indication for each cache block, indicating if it is local
or remote). If the cache block is local (decision block 64--"no"
leg), the L2 cache 36 initiates a write transaction (Wr
transaction) on the interconnect 22 to transfer the local cache
block to the memory controller 14 (block 66). This transaction may
not be snooped, and thus the L2 cache 36 may transmit the cache
block during the data portion of the Wr transaction.
[0066] If the cache block is remote (decision block 64--"yes" leg),
the L2 cache 36 initiates a WrFlush transaction on the interconnect
22 (block 68). The L2 cache 36 may wait for the response portion of
the WrFlush transaction and, if an exclusive response is received
(decision block 70--"yes" leg), the L2 cache 36 may inhibit
transmitting the cache block during the data portion. On the other
hand, if an exclusive response is not received (decision block
70--"no" leg), the L2 cache 36 transmits the cache block to the
memory bridge 32 during the data portion of the transaction (block
72).
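The flowchart of FIG. 4 translates directly into the following sketch (block numbers refer to the description above; the L2 methods are hypothetical helpers):

    def evict(l2, block):
        if l2.state(block) != "M":
            return                          # block 62: drop the victim
        if not l2.is_remote(block):
            l2.issue("Wr", block)           # block 66: local block to the
            l2.send_data(block)             # memory controller; not snooped
            return
        responses = l2.issue("WrFlush", block)   # block 68
        if "exclusive" not in responses:         # decision block 70
            l2.send_data(block)                  # block 72: L2 supplies data
        # otherwise the exclusive responder drives the data portion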
[0067] Additional CC-NUMA Details, One Embodiment
[0068] FIGS. 5-9 illustrate additional details regarding one
exemplary embodiment of a CC-NUMA protocol that may be employed by
one embodiment of the node 10. The embodiment of FIGS. 5-9 is
merely exemplary. Numerous other implementations of CC-NUMA
protocols or other distributed memory system protocols may be used
in other embodiments.
[0069] Turning next to FIG. 5, a table 142 is shown illustrating an
exemplary set of transactions supported by one embodiment of the
interconnect 22 and a table 144 is shown illustrating an exemplary
set of coherency commands supported by one embodiment of the
interfaces 30. Other embodiments including subsets, supersets, or
alternative sets of commands may be used.
[0070] The transactions illustrated in the table 142 will next be
described. An agent in the node 10 may read a cache block (either
remote or local) using the read shared (RdShd) or read exclusive
(RdExc) transactions on the interconnect 22. The RdShd transaction
is used to request a shared copy of the cache block, and the RdExc
transaction is used to request an exclusive copy of the cache
block. If the RdShd transaction is used, and no other agent reports
having a copy of the cache block during the response phase of the
transaction (except for the L2 cache 36 and/or the memory
controller 14), the agent may take the cache block in the exclusive
state. In response to the RdExc transaction, other agents in the
node invalidate their copies of the cache block (if any).
Additionally, an exclusive (or modified) owner of the cache block
may supply the data for the transaction in the data phase. Other
embodiments may employ other mechanisms (e.g. a retry on the
interconnect 22) to ensure the transfer of a modified cache
block.
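The RdShd upgrade rule in this paragraph can be stated compactly; the response encoding below is illustrative:

    def rdshd_final_state(other_agent_responses):
        """State taken by a RdShd requester: exclusive if no other agent
        (besides the L2 cache 36 and/or the memory controller 14)
        reports a copy during the response phase, else shared."""
        has_copy = any(r in ("shared", "exclusive")
                       for r in other_agent_responses)
        return "S" if has_copy else "E"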
[0071] The write transaction (Wr) and the write invalidate
transaction (WrInv) may be used by an agent to write a cache block
to memory. The Wr transaction may be used by an owner having the
modified state for the block, since no other copies of the block
need to be invalidated. The WrInv transaction may be used by an
agent that does not have exclusive ownership of the block (the
agent may even have the invalid state for the block). The WrInv
transaction causes other agents to invalidate any copies of the
block, including modified copies. The WrInv transaction may be used
by an agent that is writing the entire cache block. For example, a
DMA that is writing the entire cache block with new data may use
the transaction to avoid a read transaction followed by a write
transaction.
[0072] The RdKill and RdInv transactions may be used by the memory
bridge 32 in response to probes received by the node 10 from other
nodes. The RdKill and RdInv transactions cause the initiator (the
memory bridge 32) to acquire exclusive access to the cache block
and cause any cache agents to invalidate their copies (transferring
data to the initiator similar to the RdShd and RdExc transactions).
In one embodiment, the RdKill transaction also cancels a
reservation established by the load-linked instruction in the MIPS
instruction set, while the RdInv transaction does not. In other
embodiments, a single transaction may be used for probes. In still
other embodiments, there may be a probe-generated transaction that
invalidates agent copies of the cache block (similar to the RdKill
and RdInv transactions) and another probe-generated transaction
that permits agents to retain shared copies of the cache block.
[0073] The WrFlush transaction is a write transaction which may be
initiated by an agent when another agent may have an exclusive or
modified copy of the block. The other agent provides the data for
the WrFlush transaction, or the initiating agent provides the data
if no other agent has an exclusive or modified copy of the block.
The WrFlush transaction may be used by the L2 cache 36, in one
embodiment, as described above.
[0074] The Nop transaction is a no-operation transaction. The Nop
may be used if an agent is granted use of the interconnect 22 (e.g.
the address bus, in embodiments in which the interconnect 22 is a
split transaction bus) and the agent determines that it no longer
has a transaction to run on the interconnect 22.
[0075] The commands illustrated in the table 144 will next be
described. In the table 144, the command is shown as well as the
virtual channel in which the command travels on the interfaces 30.
The virtual channels may include, in the illustrated embodiment:
the coherent read (CRd) virtual channel; the probe (Probe) virtual
channel; the acknowledge (Ack) virtual channel; and coherent fill
(CFill) virtual channel. The CRd, Probe, Ack, and CFill virtual
channels are defined for the HTcc commands. There may be additional
virtual channels for the standard HT commands (e.g. non-posted
command (NPC) virtual channel, the posted command (PC) virtual
channel, and the response (RSP) virtual channel).
[0076] The cRdShd or cRdExc commands may be issued by the memory
bridge 32 in response to RdShd or RdExc transactions on the
interconnect 22, respectively, to read a remote cache block not
stored in the node (or, in the case of RdExc, the block may be
stored in the node but in the shared state). If the cache block is
stored in the node (with exclusive ownership, in the case of the
RdExc transaction), the read is completed on the interconnect 22
without any coherency command transmission by the memory bridge
32.
[0077] The Flush and Kill commands are probe commands for this
embodiment. The memory bridge 32 at the home node of a cache block
may issue probe commands in response to a cRdShd or cRdExc command.
The memory bridge 32 at the home node of the cache block may also
issue a probe command in response to a transaction for a local
cache block, if one or more remote nodes have a copy of the cache
block. The Flush command is used to request that a remote modified
owner of a cache block return the cache block to the home node (and
invalidate the cache block in the remote modified owner). The Kill
command is used to request that a remote owner invalidate the cache
block. In other embodiments, additional probe commands may be
supported for other state change requests (e.g. allowing remote
owners to retain a shared copy of the cache block).
[0078] The probe commands are responded to (after effecting the
state changes requested by the probe commands) using either the
Kill_Ack or WB commands. The Kill_Ack command is an acknowledgement
that a Kill command has been processed by a receiving node. The WB
command is a write back of the cache block, and is transmitted in
response to the Flush command. The WB command may also be used by a
node to write back a remote cache block that is being evicted from
the node.
[0079] The Fill command is the command to transfer data to a remote
node that has transmitted a read command (cRdExc or cRdShd) to the
home node. The Fill command is issued by the memory bridge 32 in
the home node after the probes (if any) for a cache block have
completed.
[0080] Turning next to FIG. 6, a block diagram illustrating one
embodiment of an address space implemented by one embodiment of the
node 10 is shown. Addresses shown in FIG. 6 are illustrated as
hexadecimal digits, with an underbar ("_") separating groups of
four digits. Thus, in the embodiment illustrated in FIG. 6, 40 bits
of address are supported. In other embodiments, more or fewer
address bits may be supported.
[0081] In the embodiment of FIG. 6, the address space between
00_0000_0000 and 0F_FFFF_FFFF is treated as local
address space. Transactions generated by agents in the local
address space do not generate coherency commands to other nodes,
although coherency may be enforced within the node 10 for these
addresses. That is, the local address space is not maintained
coherent with other nodes. Various portions of the local address
space may be memory mapped to I/O devices, HT, etc. as desired.
[0082] The address space between 40_0000_0000 and
EF_FFFF_FFFF is the remote coherent space 148. That is, the address
space between 40_0000_0000 and EF_FFFF_FFFF is
maintained coherent between the nodes. Each node is assigned a
portion of the remote coherent space, and that node is the home
node for the portion. As shown in FIG. 1, each node is programmable
with a node number. The node number is equal to the most
significant nibble (4 bits) of the addresses for which that node is
the home node, in this embodiment. Thus, the node numbers may range
from 4 to E in the embodiment shown. Other embodiments may support
more or fewer node numbers, as desired. In the illustrated
embodiment, each node is assigned a 64 Gigabyte (GB) portion of the
memory space for which it is the home node. The size of the portion
assigned to each node may be varied in other embodiments (e.g.
based on the address size or other factors).
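Because the node number is simply the most significant nibble of a
40-bit address, the home node of a remote coherent address can be
computed as in the following sketch (the function name is an
assumption):

#include <stdint.h>
#include <stdio.h>

/* Home node number = bits 39:36 of the 40-bit address. */
static unsigned home_node(uint64_t addr)
{
    return (unsigned)((addr >> 36) & 0xF);
}

int main(void)
{
    /* Address 50_0000_0000 is homed at the node with node number 5. */
    printf("%X\n", home_node(0x5000000000ULL)); /* prints 5 */
    return 0;
}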
[0083] For a given coherent node, there is an aliasing between the
remote coherent space for which that node is the home node and the
local address space of that node. That is, corresponding addresses
in the local address space and the portion of the remote coherent
space for which the node is the home node access the same memory
locations in the memory 24 of the node (or are memory mapped to the
same I/O devices or interfaces, etc.). For example, the node having
node number 5 aliases the address space 50_0000_0000
through 5F_FFFF_FFFF to 00_0000_0000 through
0F_FFFF_FFFF, respectively (arrow 146). Internode coherent accesses
to the memory 24 at the node 10 use the node-numbered address space
(e.g. 50_0000_0000 to 5F_FFFF_FFFF, if the node number
programmed into node 10 is 5) to access cache blocks in the memory
24. That is, agents in other nodes and agents within the node that
are coherently accessing cache blocks in the memory use the remote
coherent space, while accesses in the local address space are not
maintained coherent with other nodes (even though the same cache
block may be accessed). Thus the addresses are aliased, but not
maintained coherent, in this embodiment. In other embodiments, the
addresses in the remote coherent space and the corresponding
addresses in the local address space may be maintained
coherent.
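The aliasing amounts to clearing the node-number nibble, as in this
sketch (the function name is an assumption):

#include <stdint.h>

/* Map an address in the node's own portion of the remote coherent
 * space to the corresponding local address by clearing bits 39:36.
 * E.g. 5F_FFFF_FFFF aliases to 0F_FFFF_FFFF for node number 5. */
static uint64_t alias_to_local(uint64_t addr)
{
    return addr & 0x0FFFFFFFFFULL;
}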
[0084] A cache block is referred to as local in a node if the cache
block is part of the memory assigned to the node (as mentioned
above). Thus, the cache block may be local if it is accessed from
the local address space or the remote coherent space, as long as
the address is in the range for which the node is the home node.
Similarly, a transaction on the interconnect 22 that accesses a
local cache block may be referred to as a local transaction or
local access. A transaction on the interconnect 22 that accesses a
remote cache block (via the remote coherent address space outside
of the portion for which the node is the home node) may be referred
to as a remote transaction or a remote access.
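Putting the two definitions together, a transaction address might
be classified as local or remote under the FIG. 6 address map as
follows (a sketch; the names are assumptions):

#include <stdbool.h>
#include <stdint.h>

/* True if the address refers to memory for which this node (with
 * the given node number) is the home node, whether accessed via the
 * local address space or the remote coherent space. */
static bool is_local_block(uint64_t addr, unsigned node_number)
{
    unsigned nibble = (unsigned)((addr >> 36) & 0xF);
    if (addr <= 0x0FFFFFFFFFULL)
        return true;                  /* local address space */
    if (nibble >= 0x4 && nibble <= 0xE)
        return nibble == node_number; /* remote coherent space 148 */
    return false;                     /* HT or reserved regions */
}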
[0085] The address space between 10.sub.--0000.sub.--0000 and
3F_FFFF_FFFF may be used for additional HT transactions (e.g.
standard HT transactions) in the illustrated embodiment.
Additionally, the address space between F0.sub.--0000.sub.--0000
and FF_FFFF_FFFF may be reserved in the illustrated embodiment.
[0086] It is noted that, while the most significant nibble of the
address defines which node is being accessed, other embodiments may
use any other portion of the address to identify the node.
Furthermore, other information in the transaction may be used to
identify remote versus local transactions, in other embodiments
(e.g. command type, control information transmitted in the
transaction, etc.).
[0087] Turning next to FIG. 7, a decision tree for a read
transaction to a memory space address on the interconnect 22 of a
node 10 is shown for one embodiment. The decision tree may
illustrate operation of the node 10 for the read transaction for
different conditions of the transaction, the state of the cache
block accessed by the transaction, etc. The read transaction may,
in one embodiment, include the RdShd, RdExc, RdKill, and RdInv
transactions shown in the table 142 of FIG. 5. Each dot on the
lines within the decision tree represents a divergence point of one
or more limbs of the tree, which are labeled with the corresponding
conditions. Where multiple limbs emerge from a dot, taking one limb
also implies that the conditions for the other limbs are not met.
In FIG. 7, the exclamation point ("!") is used to indicate a
logical NOT. Not shown in FIG. 7 is the state transition made by
each coherent agent which is caching a copy of the cache block for
the read transaction. If the read transaction is RdShd, the
coherent agent may retain a copy of the cache block in the shared
state. Otherwise, the coherent agent invalidates its copy of the
cache block.
[0088] The transaction may be either local or remote, as mentioned
above. For local transactions, if the transaction is uncacheable,
then a read from the memory 24 is performed (reference numeral
150). In one embodiment, the transaction may include an indication
of whether or not the transaction is cacheable. If the transaction
is uncacheable, it is treated as a non-coherent transaction in the
present embodiment.
[0089] If the local transaction is cacheable, the operation of the
node 10 is dependent on the response provided during the response
phase of the transaction. In one embodiment, each coherent agent
responds with the state of the cache block in that agent. For
example, each coherent agent may have an associated shared (SHD)
and exclusive (EXC) signal. The agent may signal invalid state by
deasserting both the SHD and EXC signals. The agent may signal
shared state by asserting the SHD signal and deasserting the EXC
signal. The agent may signal exclusive state (or modified state) by
asserting the EXC signal and deasserting the SHD signal. The
exclusive and modified states may be treated the same in the
response phase in this embodiment, and the exclusive/modified owner
may provide the data. The exclusive/modified owner may provide,
concurrent with the data, an indication of whether the state is
exclusive or modified. While each agent may have its own SHD and
EXC signals in this embodiment (and the initiating agent may
receive the signals from each other agent), in other embodiments a
shared SHD and EXC signal may be used by all agents.
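The response encoding just described can be summarized by a small
decode function (a sketch; the type and function names are
assumptions):

#include <stdbool.h>

typedef enum { RESP_INVALID, RESP_SHARED, RESP_EXCLUSIVE, RESP_ERROR } resp_t;

/* Decode one agent's SHD/EXC response-phase signals. */
resp_t decode_response(bool shd, bool exc)
{
    if (shd && exc) return RESP_ERROR;     /* illegal; see below */
    if (exc)        return RESP_EXCLUSIVE; /* exclusive or modified owner */
    if (shd)        return RESP_SHARED;
    return RESP_INVALID;
}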
[0090] If both the SHD and EXC responses are received for the local
transaction, an error has occurred (reference numeral 152). The
memory controller may return a fatal error indication for the read
transaction, in one embodiment. If the response is exclusive (SHD
deasserted, EXC asserted) the exclusive owner provides the data for
the read transaction on the interconnect 22 (reference numeral
154). If the exclusive owner is the memory bridge 32 (as recorded
in the remote line directory 34), then a remote node has the cache
block in the modified state. The memory bridge 32 issues a probe
(Flush command) to retrieve the cache block from that remote node.
The memory bridge 32 may supply the cache block returned from the
remote node as the data for the read on the interconnect 22.
[0091] If the response is shared (SHD asserted, EXC deasserted),
the local transaction is RdExc, and the memory bridge 32 is one of
the agents reporting shared, then at least one remote node may have
a shared copy of the cache block. The memory bridge 32 may initiate
a probe (Kill command) to invalidate the shared copies of the cache
block in the remote node(s) (reference numeral 156). In one
embodiment, the data may be read from memory (or the L2 cache 36)
for this case, but the transfer of the data may be delayed until
the remote node(s) have acknowledged the probe. The memory bridge
32 may signal the memory controller 14/L2 cache 36 when the
acknowledgements have been received. In one embodiment, each
transaction may have a transaction identifier on the interconnect
22. The memory bridge 32 may transmit the transaction identifier of
the RdExc transaction to the memory controller 14/L2 cache 36 to
indicate that the data may be transmitted.
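The delayed-transfer handshake might look like the following
sketch, in which the memory bridge counts Kill_Ack commands and
then releases the data by transaction identifier (all names are
assumptions):

/* Tracks one outstanding RdExc whose data is held back until all
 * remote sharers acknowledge the Kill probes. */
struct pending_rdexc {
    unsigned txn_id;    /* transaction identifier on the interconnect 22 */
    unsigned acks_left; /* Kill_Ack commands still outstanding */
};

/* Called per Kill_Ack; returns nonzero when the memory bridge should
 * signal txn_id to the memory controller 14/L2 cache 36. */
int kill_ack_received(struct pending_rdexc *p)
{
    if (p->acks_left > 0)
        p->acks_left--;
    return p->acks_left == 0;
}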
[0092] If the response is shared, the local transaction is RdExc,
and the sharing agents are local agents (i.e. the memory bridge 32
does not report shared), then the L2 cache 36 or the memory
controller 14 may supply the data, depending on whether or not
there is an L2 hit for the cache block (reference numeral 158).
Similarly, if the response is shared and the transaction is not
RdExc, the L2 cache 36 or the memory controller 14 may supply the
data, depending on whether or not there is an L2 hit for the cache
block.
[0093] If the transaction is remote and uncacheable, then the
memory bridge 32 may generate a noncoherent read command on the
interfaces 30 to read the data. For example, a standard HT read
command may be used (reference numeral 160). If the remote
transaction is cacheable and the response on the interconnect 22 is
exclusive, then the exclusive owner supplies the data for the read
(reference numeral 162). If the remote transaction is cacheable,
the response is not exclusive, the cache block is an L2 cache hit,
and the transaction is either RdShd or the transaction is RdExc and
the L2 cache has the block in the modified state, then the L2 cache
36 supplies the data for the read (reference numeral 164).
Otherwise, the memory bridge 32 initiates a corresponding read
command to the home node of the cache block (reference numeral
166).
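The remote branch of the decision tree reduces to a few tests,
sketched below (the enum values and parameter names are
illustrative; the parenthesized numbers are the reference numerals
above):

#include <stdbool.h>

typedef enum {
    SRC_NONCOHERENT, /* standard HT read by the memory bridge 32 (160) */
    SRC_EXCL_OWNER,  /* exclusive owner supplies the data (162) */
    SRC_L2,          /* L2 cache 36 supplies the data (164) */
    SRC_HOME_NODE    /* read command to the home node (166) */
} read_src_t;

read_src_t remote_read_source(bool cacheable, bool excl_resp,
                              bool l2_hit, bool is_rd_shd,
                              bool is_rd_exc, bool l2_modified)
{
    if (!cacheable)
        return SRC_NONCOHERENT;
    if (excl_resp)
        return SRC_EXCL_OWNER;
    if (l2_hit && (is_rd_shd || (is_rd_exc && l2_modified)))
        return SRC_L2;
    return SRC_HOME_NODE;
}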
[0094] Turning next to FIG. 8, a decision tree for a write
transaction to a memory space address on the interconnect 22 of a
node 10 is shown for one embodiment. The decision tree may
illustrate operation of the node for the write transaction for
different conditions of the transaction, the state of the cache
block accessed by the transaction, etc. The write transaction may,
in one embodiment, include the Wr, WrInv, and WrFlush transactions
shown in the table 142 of FIG. 5. Each dot on the lines within the
decision tree represents a divergence point of one or more limbs of
the tree, which are labeled with the corresponding conditions.
Where multiple limbs emerge from a dot, taking one limb also
implies that the conditions for the other limbs are not met. In
FIG. 8, the exclamation point ("!") is used to indicate a logical
NOT. Not shown in FIG. 8 is the state transition made by each
coherent agent which is caching a copy of the cache block for the
write transaction. The coherent agent invalidates its copy of the
cache block.
[0095] If the transaction is a local transaction, and the
transaction is a WrInv transaction that hits in the remote line
directory 34 (i.e. a remote node is caching a copy of the cache
block), the memory controller 14 (and the L2 cache 36, if an L2
hit) updates with the write data (reference numeral 170).
Additionally, the memory bridge 32 may generate probes to the
remote nodes indicated by the remote line directory 34. The update
of the memory/L2 cache may be delayed until the probes have been
completed, at which time the memory bridge 32 may transmit the
transaction identifier of the WrInv transaction to the L2 cache
36/memory controller 14 to permit the update.
[0096] If the local transaction is uncacheable or if the L2 cache
36 is the master of the transaction (that is, the L2 cache 36
initiated the transaction), then the memory controller 14 updates
with the data (reference numeral 172). If the local transaction is
cacheable, the memory controller 14 and/or the L2 cache 36 updates
with the data based on whether or not there is an L2 cache hit
(and, in some embodiments, based on an L2 cache allocation
indication in the transaction, which allows the source of the
transaction to indicate whether or not the L2 cache allocates a
cache line for an L2 cache miss) (reference numeral 174).
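The local half of the write decision tree can likewise be
summarized (a sketch; the names are assumptions, with the
reference numerals above in comments):

#include <stdbool.h>

typedef enum {
    WR_PROBE_THEN_MEM, /* probes to remote nodes; update delayed (170) */
    WR_MEM,            /* memory controller 14 updates (172) */
    WR_MEM_AND_OR_L2   /* memory and/or L2 update, per L2 hit/allocate (174) */
} local_wr_t;

local_wr_t local_write_target(bool is_wr_inv, bool rld_hit,
                              bool cacheable, bool l2_is_master)
{
    if (is_wr_inv && rld_hit)
        return WR_PROBE_THEN_MEM;
    if (!cacheable || l2_is_master)
        return WR_MEM;
    return WR_MEM_AND_OR_L2;
}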
[0097] If the transaction is a remote transaction, the transaction
is a WrFlush transaction, and the response to the transaction is
exclusive, the exclusive owner supplies the data (reference numeral
176). If the remote WrFlush transaction results in a non-exclusive
response (shared or invalid), the L2 cache 36 supplies the data of
the WrFlush transaction. In one embodiment, the L2 cache 36 retains
the node state of the remote cache block as recorded in the home
node, and the L2 cache 36 uses the WrFlush transaction to evict a
remote cache block that is in the modified state in the node. Thus,
if another agent
has the cache block in the exclusive state, that agent may have a
more recent copy of the cache block that should be returned to the
home node. Otherwise, the L2 cache 36 supplies the block to be
returned to the home node (reference numeral 182). In either case,
the memory bridge 32 may capture the WrFlush transaction and data,
and may perform a WB command to return the cache block to the home
node.
[0098] If the remote transaction is not a WrFlush transaction, and
is not cache coherent, the memory bridge 32 receives the write
transaction and performs a noncoherent Wr command (e.g. a standard
HT write) to transmit the cache block to the home node (reference
numeral 180). If the remote transaction is not a WrFlush
transaction, is cache coherent, and is an L2 hit, the L2 cache 36
may update with the data (reference numeral 182).
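The remote half of the write decision tree, paragraphs
[0097]-[0098], can be sketched in the same way (the names are
assumptions):

#include <stdbool.h>

typedef enum {
    WRR_OWNER_WB,  /* exclusive owner supplies data; WB to home node (176) */
    WRR_L2_WB,     /* L2 cache 36 supplies data; WB to home node (182) */
    WRR_NONCOH,    /* noncoherent Wr (standard HT write) to home node (180) */
    WRR_L2_UPDATE, /* L2 cache 36 updates with the data (182) */
    WRR_UNSPEC     /* combination not covered by the description above */
} remote_wr_t;

remote_wr_t remote_write_action(bool is_wr_flush, bool excl_resp,
                                bool coherent, bool l2_hit)
{
    if (is_wr_flush)
        return excl_resp ? WRR_OWNER_WB : WRR_L2_WB;
    if (!coherent)
        return WRR_NONCOH;
    return l2_hit ? WRR_L2_UPDATE : WRR_UNSPEC;
}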
[0099] Turning next to FIG. 9, a block diagram illustrating
operation of one embodiment of the memory bridge 32 in response to
various coherency commands received from the interface circuits
20A-20C is shown. The received command is shown in an oval.
Commands initiated by the memory bridge 32 in response to the
received command (and the state of the affected cache block as
indicated in the remote line directory 34) are shown in solid
boxes. Dotted boxes are commands received by the memory bridge 32
in response to the commands transmitted in the preceding solid
boxes. The cache block affected by a command is shown in
parentheses after the command.
[0100] In one embodiment, the remote line directory 34 may be
accessed in response to a transaction on the interconnect 22. In
such an embodiment, the memory bridge 32 may initiate a transaction
on the interconnect 22 in response to certain coherent commands in
order to retrieve the remote line directory 34 (as well as to
effect any state changes in the coherent agents coupled to the
interconnect 22, if applicable). In other embodiments, the memory
bridge 32 may be configured to read the remote line directory 34
prior to generating a transaction on the interconnect 22, and may
conditionally generate a transaction if needed based on the state
of the remote line directory 34 for the requested cache block.
Additionally, in one embodiment, the remote line directory 34 may
maintain the remote state for a subset of the local cache blocks
that are shareable remotely (e.g. a subset of the portion of the
remote coherent space 148 that is assigned to the local node). If a
cache block is requested by a remote node using a coherency command
and there is no entry in the remote line directory 34 for the cache
block, then a victim cache block may be replaced in the remote line
directory 34 (and probes may be generated to invalidate the victim
cache block in remote nodes). In other embodiments, the remote line
directory 34 may be configured to track the state of each cache
block in the portion of the remote coherent space 148 that is
assigned to the local node. In such embodiments, operations related
to the victim cache blocks may be omitted from FIG. 9.
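A remote line directory entry of the kind described might be
represented as follows (an illustrative layout; the field widths
and the per-node sharer vector are assumptions):

#include <stdbool.h>
#include <stdint.h>

struct rld_entry {
    uint64_t tag;      /* identifies the cache block within space 148 */
    bool     valid;    /* entry is tracking a remotely cached block */
    bool     modified; /* a remote node holds the block modified */
    uint16_t sharers;  /* one bit per possible remote node number */
};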
[0101] For a cRdShd command for cache block "A" received by the
memory bridge 32 (reference numeral 190), the memory bridge 32 may
generate a RdShd transaction on the interconnect 22. Based on the
remote line directory (RLD) state for the cache block A, a number
of operations may occur. If the RLD state is shared, or invalid and
there is an entry available for allocation without requiring a
victim cache block to be evicted ("RLD empty" in FIG. 9), then the
memory bridge 32 may transmit a fill command to the remote node
with the data supplied to the memory bridge 32 in response to the
RdShd transaction on the interconnect 22 (reference numeral 192).
On the other hand, if the RLD state is invalid and an eviction of a
victim block is used to free an RLD entry for cache block A, then
the memory bridge 32 may transmit probes to the remote nodes having
copies of the victim cache block. If the victim cache block is
shared, the memory bridge 32 may transmit a Kill command (or
commands, if multiple nodes are sharing the victim cache block) for
the victim block (reference numeral 194). The remote nodes respond
with Kill_Ack commands for the victim block (reference numeral
196). If the victim block is modified, the memory bridge 32 may
transmit a Flush command to the remote node having the modified
state (reference numeral 198). The remote node may return the
modified block with a WB command (reference numeral 200). In either
case of evicting a victim block, the memory bridge 32 may, in
parallel, generate a Fill command for the cache block A (reference
numeral 192, via arrow 202). Finally, if the RLD state is modified
for the cache block A, the memory bridge 32 may generate a Flush
command for the cache block A to the remote node (reference numeral
204), which responds with a WB command and the cache block A
(reference numeral 206). The memory bridge 32 may then transmit the
Fill command with the cache block A provided via the write back
command (reference numeral 192).
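The cRdShd flow of FIG. 9 might be summarized in code as follows
(a sketch only; the send_* helpers and the simplified rld_entry are
hypothetical stand-ins for the memory bridge's command logic):

#include <stdbool.h>
#include <stdio.h>

struct rld_entry { bool valid, modified; };

/* Stub transmitters; a real bridge would queue these commands on
 * the Probe and CFill virtual channels. */
static void send_kill(void)  { puts("Kill -> remote sharers");  }
static void send_flush(void) { puts("Flush -> modified owner"); }
static void send_fill(void)  { puts("Fill -> requesting node"); }

static void handle_crdshd(const struct rld_entry *victim,
                          const struct rld_entry *blk)
{
    /* A RdShd transaction is generated on the interconnect 22 first. */
    if (victim && victim->valid) {      /* RLD eviction needed (194/198) */
        if (victim->modified)
            send_flush();               /* remote node replies with WB */
        else
            send_kill();                /* remote nodes reply Kill_Ack */
    }
    if (blk->valid && blk->modified)
        send_flush();                   /* WB returns the data (204/206) */
    send_fill();                        /* reference numeral 192 */
}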
[0102] In response to a cRdExc command for a cache block A
(reference numeral 210), operation may be similar to the cRdShd
case for some RLD states. Similar to the cRdShd case, the memory
bridge 32 may initiate a RdExc transaction on the interconnect 22
in response to the cRdExc command. Similar to the cRdShd case, if
the RLD is invalid and no eviction of a victim cache block is
needed in the RLD to allocate an entry for the cache block A, then
the memory bridge 32 may supply the cache block supplied on the
interconnect 22 for the RdExc transaction in a fill command to the
remote node (reference numeral 212). Additionally, if the RLD state
is invalid for the cache block A and a victim cache block is
evicted from the RLD 34, the memory bridge 32 may operate in a
similar fashion to the cRdShd case (reference numerals 214 and 216
and arrow 222 for the shared case of the victim block and reference
numerals 218 and 220 and arrow 222 for the modified case of the
victim block). If the RLD state is modified for the cache block A,
the memory bridge 32 may operate in a similar fashion to the cRdShd
case (reference numerals 224 and 226). If the RLD state is shared
for the cache block A, the memory bridge 32 may generate Kill
commands for each remote sharing node (reference numeral 228). The
memory bridge 32 may wait for the Kill_Ack commands from the remote
sharing nodes (reference numeral 230), and then transmit the Fill
command with the cache block A provided on the interconnect 22 in
response to the RdExc transaction (reference numeral 212).
[0103] In response to a Wr command to the cache block A, the memory
bridge 32 may generate a Wr transaction on the interconnect 22
(reference numeral 240). If the RLD state is invalid for the cache
block A, the memory bridge 32 may transmit the write data on the
interconnect 22 and the Wr command is complete (reference numeral
242). If the RLD state is shared for the cache block A, the memory
bridge 32 may generate Kill commands to each remote sharing node
(reference numeral 244) and collect the Kill_Ack commands from
those remote nodes (reference numeral 246) in addition to
transmitting the data on the interconnect 22. If the RLD state is
modified for a remote node, the memory bridge 32 may generate a
Flush command to the remote node (reference numeral 248) and
receive the WB command from the remote node (reference numeral
250). In one embodiment, the memory bridge 32 may delay
transmitting the write data on the interconnect 22 until the WB
command or Kill_Ack commands are received (although the data
returned with the WB command may be dropped by the memory bridge
32).
[0104] The above commands are received by the memory bridge 32 for
cache blocks for which the node 10 including the memory bridge 32
is the home node. The memory bridge 32 may also receive Flush
commands or Kill commands for cache blocks for which the node 10 is
a remote node. In response to a Flush command to the cache block A
(reference numeral 260), the memory bridge 32 may initiate a RdInv
transaction on the interconnect 22. If the local state of the cache
block is modified, the memory bridge 32 may transmit a WB command
to the home node, with the cache block supplied on the interconnect
22 in response to the RdInv transaction (reference numeral 262). If
the local state of the cache block is not modified, the memory
bridge 32 may not respond to the Flush command (reference numeral
264). In this case, the node may already have transmitted a WB
command to the home node (e.g. in response to evicting the cache
block locally). In response to a Kill command to the cache block A
(reference numeral 270), the memory bridge 32 may initiate a RdKill
transaction on the interconnect 22. The memory bridge 32 may
respond to the Kill command with a Kill_Ack command (reference
numeral 272).
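The remote-node side of this probe handling might be sketched as
follows (the names are assumptions):

#include <stdbool.h>
#include <stdio.h>

/* Flush received: a RdInv transaction is run on the interconnect 22,
 * then a WB is sent only if the block was locally modified (262/264). */
static void on_flush(bool locally_modified)
{
    if (locally_modified)
        puts("WB -> home node");
    /* else no response; a WB may already be in flight for the block */
}

/* Kill received: a RdKill transaction invalidates local copies, and
 * a Kill_Ack is returned to the home node (272). */
static void on_kill(void)
{
    puts("Kill_Ack -> home node");
}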
[0105] In one embodiment, the memory bridge 32 may also be
configured to receive a non-cacheable read (RdNC) command (e.g.
corresponding to a standard HT read) (reference numeral 280). In
response, the memory bridge 32 may initiate a RdShd transaction on
the interconnect 22. If the RLD state is modified for the cache
block including the data to be read, the memory bridge 32 may
transmit a Flush command to the remote node having the modified
cache block (reference numeral 282), and may receive the WB command
from the remote node (reference numeral 284). Additionally, the
memory bridge 32 may supply data received on the interconnect 22 in
response to the RdShd transaction as a read response (RSP) to the
requesting node (reference numeral 286).
[0106] Computer Accessible Medium
[0107] Turning next to FIG. 10, a block diagram of a computer
accessible medium 300 including one or more data structures
representative of the circuitry included in the node 10 is shown.
Generally speaking, a computer accessible medium may include
storage media such as magnetic or optical media, e.g., disk,
CD-ROM, or DVD-ROM, volatile or non-volatile memory media such as
RAM (e.g. SDRAM, RDRAM, SRAM, etc.), ROM, etc., as well as media
accessible via transmission media or signals such as electrical,
electromagnetic, or digital signals, conveyed via a communication
medium such as a network and/or a wireless link.
[0108] Generally, the data structure(s) of the circuitry on the
computer accessible medium 300 may be read by a program and used,
directly or indirectly, to fabricate the hardware comprising the
circuitry. For example, the data structure(s) may include one or
more behavioral-level descriptions or register-transfer level (RTL)
descriptions of the hardware functionality in a hardware
description language (HDL) such as Verilog or VHDL. The
description(s) may be
read by a synthesis tool which may synthesize the description to
produce one or more netlist(s) comprising lists of gates from a
synthesis library. The netlist(s) comprise a set of gates which
also represent the functionality of the hardware comprising the
circuitry. The netlist(s) may then be placed and routed to produce
one or more data set(s) describing geometric shapes to be applied
to masks. The masks may then be used in various semiconductor
fabrication steps to produce a semiconductor circuit or circuits
corresponding to the circuitry. Alternatively, the data
structure(s) on computer accessible medium 300 may be the
netlist(s) (with or without the synthesis library) or the data
set(s), as desired. In yet another alternative, the data structures
may comprise the output of a schematic program, or netlist(s) or
data set(s) derived therefrom.
[0109] While computer accessible medium 300 includes a
representation of the node 10, other embodiments may include a
representation of any portion of the node 10 (e.g. processors
12A-12N, memory controller 14, L2 cache 36, interconnect 22, memory
bridge 32, remote line directory 34, switch 18, interface circuits
20A-20C, etc.).
[0110] Numerous variations and modifications will become apparent
to those skilled in the art once the above disclosure is fully
appreciated. It is intended that the following claims be
interpreted to embrace all such variations and modifications.
* * * * *