U.S. patent application number 14/230161 was filed with the patent office on 2014-03-31 for managing multicast distribution using multicast trees.
This patent application is currently assigned to JUNIPER NETWORKS, INC. The applicant listed for this patent is JUNIPER NETWORKS, INC. The invention is credited to Venu IYENGAR, Yong LUO, Vikram RAUTELA, and Abhishek TRIPATHI.
Publication Number: 20140211797
Application Number: 14/230161
Family ID: 50441482
Filed Date: 2014-03-31

United States Patent Application 20140211797
Kind Code: A1
LUO; Yong; et al.
Published: July 31, 2014
MANAGING MULTICAST DISTRIBUTION USING MULTICAST TREES
Abstract
A method includes receiving multicast traffic intended for host
devices; identifying a flow associated with the multicast traffic;
retrieving information associated with a group of multicast trees,
where the group of multicast trees includes information associated
with a group of I/O units, associated with a network node;
identifying a particular tree that corresponds to the identified
flow, where the particular tree includes information associated
with a set of I/O units; and transferring the multicast traffic to
an I/O unit, of the set of I/O units, based on the identification
of the particular tree, where the transferring enables the I/O unit
to send a copy of the multicast traffic to other I/O units of the
set of I/O units, and the set of I/O units to process the multicast
traffic in a manner that utilizes bandwidth or processing resources
in a controlled manner and to send a copy of the multicast traffic
to each of the host devices.
Inventors: LUO; Yong (Sunnyvale, CA); RAUTELA; Vikram (Morgan Hill, CA); TRIPATHI; Abhishek (Bangalore, IN); IYENGAR; Venu (Bangalore, IN)

Applicant: JUNIPER NETWORKS, INC. (Sunnyvale, CA, US)
Assignee: JUNIPER NETWORKS, INC. (Sunnyvale, CA)
Family ID: 50441482
Appl. No.: 14/230161
Filed: March 31, 2014
Related U.S. Patent Documents

Application Number | Filing Date | Patent Number
12/961,960 | Dec 7, 2010 | 8,699,486
14/230,161 | |
Current U.S. Class: 370/390
Current CPC Class: H04L 45/16 20130101; H04L 12/18 20130101
Class at Publication: 370/390
International Class: H04L 12/18 20060101 H04L012/18
Claims
1-24. (canceled)
25. A method performed by a network device, the method comprising:
receiving, by a first input/output (I/O) unit of a plurality of I/O
units associated with the network device, multicast traffic
intended for a plurality of devices connected via the network
device; determining, by the first I/O unit and based on information
identifying a flow associated with the multicast traffic, whether
the multicast traffic is to be processed based on a first binary
tree or a second binary tree, a root node, of the first binary
tree, being associated with a second I/O unit of the plurality of
I/O units, a root node, of the second binary tree, being associated
with a third I/O unit of the plurality of I/O units; and
selectively sending, by the first I/O unit, the multicast traffic
to: the second I/O unit to be processed based on the first binary
tree when the multicast traffic is to be processed based on the
first binary tree, or the third I/O unit to be processed based on
the second binary tree when the multicast traffic is to be
processed based on the second binary tree.
26. The method of claim 25, where a leaf node, of the first binary
tree, corresponds to the root node of the second binary tree.
27. The method of claim 25, further comprising: obtaining, from a
packet of the multicast traffic, information associated with the
flow; and generating the information identifying the flow based on
the information associated with the flow, where the information
associated with the flow includes at least one of: a source
address, a destination address, information identifying a type of
the multicast traffic, or information identifying a type of service
associated with the multicast traffic.
28. The method of claim 25, where the multicast traffic is sent to
the second I/O unit to be processed based on the first binary tree,
and where the method further comprises: replicating, using the
second I/O unit, the multicast traffic to generate copies of the
multicast traffic; forwarding, using the second I/O unit, each of
the copies of the multicast traffic to a respective one of a fourth
I/O unit and a fifth I/O unit of the plurality of I/O units, the
fourth I/O unit and the fifth I/O unit being associated with leaf
nodes of the first binary tree; and generating, using the fourth
I/O unit and the fifth I/O unit, additional copies of the copies of
the multicast traffic to be sent to the plurality of devices.
29. The method of claim 25, where determining whether the multicast
traffic is to be processed based on the first binary tree or the
second binary tree includes: determining whether the information
identifying the flow corresponds to the first binary tree or the
second binary tree.
30. The method of claim 25, where selectively sending the multicast
traffic includes: retrieving information associated with the first
binary tree when the multicast traffic is to be processed based on
the first binary tree, or retrieving information associated with
the second binary tree when the multicast traffic is to be
processed based on the second binary tree.
31. A device comprising: a memory; and a plurality of input/output
(I/O) units, a first I/O unit, of the plurality of I/O units, to:
receive traffic, determine, based on information identifying a flow
associated with the traffic, that the traffic is to be processed
based on a tree of a plurality of trees, a root node, of the tree,
being associated with a second I/O unit of the plurality of I/O
units, a leaf node, of the tree, being associated with a third I/O
unit of the plurality of I/O units, retrieve information associated
with the tree from a data structure, the data structure storing
information associated with the plurality of trees, and send the
traffic to the second I/O unit to be processed based on the
information associated with the tree.
32. The device of claim 31, where the traffic corresponds to
multicast traffic, and where the second I/O unit is to: replicate
the multicast traffic to generate copies of the multicast traffic,
and send each of the copies of the multicast traffic to a
respective one of the third I/O unit and a fourth I/O unit of the
plurality of I/O units, the fourth I/O unit being associated with
another leaf node of the tree.
33. The device of claim 31, where the flow is associated with a
plurality of nodes, and where the second I/O unit is to send a copy
of the traffic to the third I/O unit, and where the third I/O unit
is to replicate the traffic for one or more of the plurality of
nodes.
34. The device of claim 31, where the second I/O unit is to:
replicate the traffic to generate copies of the traffic, and send
each of the copies of the traffic to a fourth I/O unit, of the
plurality of I/O units, for transmission to a plurality of nodes,
where the fourth I/O unit is not associated with the tree.
35. The device of claim 31, where the first I/O unit is further to:
obtain, from one or more packets associated with the traffic,
information associated with the flow, and generate the information
identifying the flow based on the information associated with the
flow.
36. The device of claim 31, where the first I/O unit is further to:
determine, based on information identifying another flow associated
with another traffic, that the other traffic is to be processed
based on another tree of the plurality of trees, a root node, of
the other tree, being associated with a fourth I/O unit of the
plurality of I/O units, a leaf node, of the other tree, being
associated with a fifth I/O unit of the plurality of I/O units;
retrieve information associated with the other tree from the data
structure; and send the traffic to the fourth I/O unit to be
processed based on the information associated with the other
tree.
37. A system comprising: a network device to: receive, by a first
input/output (I/O) unit of a plurality of I/O units associated with
the network device, traffic; determine, by the first I/O unit and
based on information identifying a flow associated with a portion
of the traffic, whether the portion of the traffic is to be
processed based on a first tree or a second tree, a root node, of
the first tree, being associated with a second I/O unit of the
plurality of I/O units, a root node, of the second tree, being
associated with a third I/O unit of the plurality of I/O units; and
selectively send, by the first I/O unit, the traffic to: the second
I/O unit to be processed based on the first tree when the traffic
is to be processed based on the first tree, or the third I/O unit
to be processed based on the second tree when the traffic is to be
processed based on the second tree.
38. The system of claim 37, where the network device is further to:
determine, by the first I/O unit and based on information
identifying another flow associated with another portion of the
traffic, that the other portion of traffic is to be processed based
on a third tree, a root node, of the third tree, being associated
with a fourth I/O unit of the plurality of I/O units, a leaf node,
of the third tree, being associated with a fifth I/O unit of the
plurality of I/O units, and a type associated with the third tree
being different than a type associated with the first tree and the
second tree.
39. The system of claim 38, where the first tree and the second
tree correspond to binary trees.
40. The system of claim 38, where the network device is further to:
retrieve information associated with the third tree from a data
structure, the data structure storing information associated with a
plurality of trees; and send, by the first I/O unit, the traffic to
the second I/O unit to be processed based on the information
associated with the third tree.
41. The system of claim 38, where the traffic corresponds to
multicast traffic, and where the network device is further to:
replicate, using the fourth I/O unit, the multicast traffic to
generate copies of the multicast traffic, and send, using the
fourth I/O unit, each of the copies of the multicast traffic to a
respective one of the fifth I/O unit and a sixth I/O unit of the
plurality of I/O units, the sixth I/O unit being associated with
another leaf node of the third tree.
42. The system of claim 37, where, when determining whether the
portion of the traffic is to be processed based on the first tree
or the second tree, the network device is to: determine whether the
information identifying the flow corresponds to the first tree or
the second tree.
43. The system of claim 37, where a leaf node, of the first binary
tree, corresponds to the root node of the second binary tree.
44. The system of claim 37, where the traffic is sent to the second
I/O unit to be processed based on the first binary tree, and where
the network device is further to: replicate, using the second I/O
unit, the traffic to generate copies of the traffic; forward, using
the second I/O unit, each of the copies of the traffic, to a
respective one of a fourth I/O unit and a fifth I/O unit of the
plurality of I/O units, the fourth I/O unit and the fifth I/O unit
being associated with leaf nodes of the first binary tree; and
forward, using the fourth I/O unit and the fifth I/O unit, the
copies of the traffic to the plurality of devices.
Description
BACKGROUND
[0001] In an increasingly networked world, more and more traffic,
such as data, voice, and video, is transmitted over public and
proprietary networks. The public or private networks process
traffic based on the type of traffic (e.g., text, data, video,
etc.), a particular quality of service (QoS) with respect to the
traffic, a category of traffic (e.g., unicast, broadcast, or
multicast traffic), etc. When processing broadcast and/or multicast
traffic, network nodes within the public or private networks may
replicate received traffic in order to transmit a copy of the
traffic to host devices that are authorized to receive the
traffic.
[0002] Unfortunately, when performing replications associated with
multicast traffic, network nodes may not fully utilize the
bandwidth and/or processing capacity of the network nodes.
Additionally, network nodes may generate more copies of the traffic
than are to be transmitted to the host devices, which may cause the
network nodes to waste bandwidth resources and/or processing
capacity. Wasting the bandwidth resources and/or processing
capacity may cause the network nodes to become congested and/or
reduce the throughput of the public or private networks.
SUMMARY
[0003] According to one aspect, a method may include receiving, by
a first input/output (I/O) unit of a group of I/O units associated
with a network node, multicast traffic intended for a group of host
devices; identifying, by the first I/O unit, a flow associated with
the multicast traffic based on information obtained from the
multicast traffic; retrieving, by the first I/O unit and from a
memory associated with the network node, information associated
with two or more multicast trees, where the two or more multicast
trees may include information associated with all or a portion of
the group of I/O units, associated with the network node, that
process multicast traffic. The method may also include identifying,
by the first I/O unit, a particular tree, of the two or more
multicast trees, that corresponds to the identified flow, where the
particular tree may include information associated with one or more
I/O units of the group of I/O units; and transferring, by the first
I/O unit, the multicast traffic to a second I/O unit, of the one or
more I/O units, based on the identification of the particular tree,
where the transferring may enable the second I/O unit to distribute
a copy of the transferred multicast traffic to other I/O units of
the one or more I/O units, and the one or more I/O units to process
the multicast traffic in a manner that utilizes bandwidth or
processing resources in a controlled manner and to send a copy of
the multicast traffic to each of the group of host devices based on
the processing.
[0004] According to another aspect, a network node may include a
memory to store information associated with a group of sub trees
that correspond to a group of I/O units associated with the network
node; and a switch fabric to transfer multicast traffic between the
group of I/O units. The network node may also include a first I/O
unit, of the group of I/O units to receive multicast traffic
intended for a group of host devices connected to the network node;
identify one or more of the group of sub trees with which to
process the multicast traffic based on one or more flows associated
with the multicast traffic; send, to a second I/O unit of the group
of I/O units and via the switch fabric, a portion of the multicast
traffic associated with one of the one or more flows that
corresponds to a sub tree of the group of sub trees, where the sub
tree may include information associated with the second I/O unit
and a set of other I/O units of the group of I/O units. The network
node may further include the second I/O unit to receive the portion
of the multicast traffic; send a copy of the portion of the
multicast traffic to the set of I/O units that permits the second
I/O unit and the set of I/O units to generate a group of copies to
be outputted to one or more of the group of host devices.
[0005] According to yet another aspect, a method performed by a
network node may include receiving, by a first input/output (I/O)
unit associated with the network node, multicast traffic intended
for a group of host devices connected to the network node; and
identifying whether the multicast traffic is to be processed based on
a binary tree or an inverse binary tree based on a flow associated
with the multicast traffic. The binary tree may include a first
root node associated with a second I/O unit and a group of leaf
nodes, associated with two or more I/O units, of a group of I/O
units associated with the network node, that are logically
connected to the first root node. The inverse binary tree may
include a second root node associated with a third I/O unit and a
group of other leaf nodes, associated with two or more other I/O
units of the group of I/O units, that are logically connected to
the second root node. The method may also include sending, by the
first I/O unit and to the second I/O unit, the multicast traffic to
be processed based on a determination that the identified flow
corresponds to the binary tree, where the sending to the second I/O
unit may permit the second I/O unit and the two or more I/O units
to process the multicast traffic in a manner that enables a copy of
the multicast traffic to be outputted to the group of host devices.
The method may further include sending, by the first I/O unit and
to the third I/O unit, the multicast traffic to be processed based
on a determination that the identified flow corresponds to the
inverse binary tree, where the sending to the third I/O unit may
permit the third I/O unit and the two or more other I/O units to
process the multicast traffic in a manner that enables a copy of
the multicast traffic to be outputted to the group of host
devices.
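The binary tree / inverse binary tree pairing above can be illustrated with a guessed construction: build one binary tree over the I/O units in order, and the "inverse" tree over the same units in reverse order, so the two trees are rooted at different units. This layout is an assumption made for illustration; the patent does not mandate any particular construction.

```python
# Illustrative sketch only: lay a binary tree over a list of I/O unit
# identifiers (unit i's children are units 2i+1 and 2i+2, as in an
# array-backed binary heap), and an "inverse" tree over the reversed
# list. All unit names are hypothetical.

def array_tree(units):
    """Return {unit: [child units]} for an array-backed binary tree."""
    n = len(units)
    return {u: [units[c] for c in (2 * i + 1, 2 * i + 2) if c < n]
            for i, u in enumerate(units)}

units = ["io0", "io1", "io2", "io3"]
binary = array_tree(units)
inverse = array_tree(list(reversed(units)))

# The two trees place the replication load on different root units.
assert binary["io0"] == ["io1", "io2"]
assert inverse["io3"] == ["io2", "io1"]
```

Because the roots differ, flows assigned to the two trees start their replication on different I/O units, which is what spreads the processing load.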
BRIEF DESCRIPTION OF THE DRAWINGS
[0006] The accompanying drawings, which are incorporated in and
constitute a part of this specification, illustrate one or more
implementations described herein and, together with the
description, explain these implementations. In the drawings:
[0007] FIG. 1 is a diagram illustrating an example environment in
which systems and/or methods, described herein, may be
implemented;
[0008] FIG. 2 is a diagram illustrating example components of a
network node in the environment shown in FIG. 1;
[0009] FIG. 3 is a diagram illustrating example components of an
I/O unit depicted in FIG. 2;
[0010] FIG. 4 is a diagram illustrating an example load balancing
operation, using a binary tree pair, according to an implementation
described herein;
[0011] FIG. 5 is a diagram illustrating an example process for
processing multicast traffic using a binary tree pair;
[0012] FIGS. 6A and 6B are diagrams illustrating example sub tree
data structures used by the network node in the environment shown
in FIG. 1;
[0013] FIG. 7 is a diagram illustrating an example load balancing
operation, using a group of sub trees, according to an
implementation described herein; and
[0014] FIG. 8 is a flow chart illustrating an example process for
processing multicast traffic using a group of sub trees.
DETAILED DESCRIPTION
[0015] The following detailed description refers to the
accompanying drawings. The same reference numbers in different
drawings may identify the same or similar elements. Also, the
following detailed description does not limit the invention.
[0016] Systems and/or methods described herein may include
techniques for processing multicast traffic in a manner that does
not cause excess copies of the traffic to be generated by a network
node and/or that balances processing capacity and/or utilization of
bandwidth resources among packet forwarding components of the
network node. As described herein, a load balancing application may
distribute the incoming multicast traffic to a packet forwarding
component (e.g., an input/output (I/O) component, a packet
forwarding engine (PFE), etc.), within the network node, to be
processed based on a flow with which the incoming multicast traffic
is associated. The term "flow," as used herein, may include a group
of multicast packets associated with common attributes, such as a
destination address, a source address, quality of service (QoS), a
traffic type (e.g., text, video, voice, data, etc.), a type of
service (e.g., messaging, security, operations and maintenance,
etc.), etc.
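The flow grouping described above can be sketched as a simple key extraction over the attributes the text lists. The field names below are invented for illustration and are not part of the patent.

```python
# Illustrative sketch: derive a flow key from the packet attributes the
# text enumerates (source address, destination address, QoS, traffic
# type, type of service). All field names are hypothetical.

def flow_key(packet):
    """Group multicast packets into flows by their common attributes."""
    return (
        packet.get("src_addr"),
        packet.get("dst_addr"),
        packet.get("qos"),
        packet.get("traffic_type"),
        packet.get("service_type"),
    )

pkt_a = {"src_addr": "10.0.0.1", "dst_addr": "239.1.1.1",
         "qos": "ef", "traffic_type": "video", "service_type": "ops"}
pkt_b = dict(pkt_a)  # identical attributes -> same flow
assert flow_key(pkt_a) == flow_key(pkt_b)
```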
[0017] Load balancing processor 310 may perform a load balancing
operation on the incoming multicast traffic (e.g., using a
multicast tree and/or a group of multicast trees as described in
detail below) in order to control and/or manage the manner in which
the incoming multicast traffic is to be distributed for processing.
Additionally, or alternatively, the load balancing operation may
cause a packet forwarding component to generate copies of the
multicast traffic intended for recipients (e.g., host devices)
included in a group membership corresponding to the multicast
traffic and without generating excess copies of the multicast
traffic that are not to be sent to the intended recipients. By not
generating the excess copies of the multicast traffic, bandwidth
and/or processing resources within the network node may be
preserved and/or allocated to perform other functions.
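One way to picture the distribution step is a deterministic hash of the flow key onto one of the available multicast trees; the same flow then always lands on the same tree, preserving packet order within the flow. This hashing scheme is an assumption for illustration, not the patent's prescribed mechanism.

```python
# Hypothetical sketch of the load-balancing choice: hash the flow key
# and map it onto one of the configured multicast trees (e.g., a binary
# tree and an inverse binary tree).

import hashlib

def select_tree(flow_key, trees):
    """Deterministically pick a tree for a flow."""
    digest = hashlib.sha256(repr(flow_key).encode()).digest()
    return trees[digest[0] % len(trees)]

trees = ["binary", "inverse_binary"]
# The same flow always maps to the same tree.
assert select_tree(("src", "dst"), trees) == select_tree(("src", "dst"), trees)
```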
[0018] As described herein, distribution of the incoming multicast
traffic based on the flow enables bandwidth resources and/or
processing capacity of the network node to be apportioned and/or
balanced among the packet forwarding components associated with a
network node. Additionally, apportioning and/or balancing the
processing among the packet forwarding components may enable the
network node to avoid becoming congested by preventing a particular
packet forwarding component from becoming over-tasked (e.g., when
processing and/or available bandwidth capacity has been
reached).
[0019] FIG. 1 is a diagram illustrating an example environment 100
in which systems and/or methods described herein may be
implemented. As shown in FIG. 1, environment 100 may include a
group of network nodes 110-1, . . . , 110-Q (where Q ≥ 1)
(hereinafter collectively referred to as "nodes 110" and
individually as "node 110") and a group of host devices 115-1, . . . ,
115-P (where P ≥ 1) (hereinafter collectively referred to as
"hosts 115" and individually as "host 115"), interconnected by
a network 120. FIG. 1 shows an example of devices that may be
included in environment 100. In other implementations, environment
100 may include fewer devices, different devices, differently
arranged devices, or additional devices than depicted in FIG. 1. In
still other implementations, one or more devices of environment 100
may perform one or more tasks described as being performed by one
or more other devices of environment 100.
[0020] Node 110 may include a network device that transmits data
traffic. For example, node 110 may take the form of a routing
device, a switching device, a multiplexing device, or a device that
performs a combination of routing, switching, and/or multiplexing
functions. In one implementation, node 110 may be a digital device.
In another implementation, node 110 may be an optical device. In
yet another implementation, node 110 may be a combination of a
digital device and an optical device.
[0021] Node 110 may include an internal or external storage device
and/or memory that stores information associated with node 110
operations. In one example, node 110 may store, in the storage
device and/or memory, network topology information, routing tables
and/or packet forwarding tables. In another example, node 110 may
store, in the storage device and/or memory, information associated
with a virtual local area network (VLAN) hosted by node 110.
[0022] In one example, node 110 may process incoming multicast
traffic. For example, node 110 may receive incoming multicast
traffic and may generate copies of the traffic to be transmitted to
other nodes 110 and/or hosts 115 based on which other nodes 110
and/or hosts 115 are included in a membership group associated with
the multicast traffic. When processing the traffic, node 110 may
perform a load balancing operation that enables node 110 to
replicate the incoming traffic in a manner that does not create
excess copies of the traffic that are not destined for intended
recipients (e.g., hosts 115) and/or that processes the traffic
based on a particular flow with which the traffic is
associated.
[0023] Host 115 may include any computation or communication
device, such as a wireless mobile communication device that is
capable of communicating via network 120. For example, host 115 may
include a radiotelephone, a personal communications system (PCS)
terminal (e.g., that may combine a cellular radiotelephone with
data processing and data communications capabilities), a personal
digital assistant (PDA) (e.g., that can include a radiotelephone, a
pager, Internet/intranet access, etc.), a laptop computer, a
personal computer, a landline telephone, a set top box (STB), a
television, a camera, a personal gaming system, or another type of
computation or communication device. In another implementation,
host 115 may be a server device that may gather, process, search,
store, and/or provide information in a manner similar to that
described herein.
[0024] Host 115 may be associated with unique identification
information, such as a device identifier (e.g., a STB identifier,
an IP address, a MAC address, an international mobile subscriber
identity (IMSI), a national access identifier (NAI), etc.), a
public identifier (e.g., a mobile device number (MDN), a landline
device number (LDN), a mobile subscriber integrated services
digital network number (MSISDN), etc.), that may permit node 110 to
distinguish between hosts 115 and/or determine group membership
associated with multicast traffic. Additionally, or alternatively,
host 115 may, for example, receive multicast traffic from node 110.
In another example, host 115 may send Internet Group Management
Protocol (IGMP) traffic to node 110 that includes multicast
membership group requests to receive particular multicast traffic
(e.g., a join) or to stop receiving the particular multicast
traffic (e.g., a leave).
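The join/leave handling described above amounts to maintaining per-group membership state. The sketch below is an illustration of that bookkeeping (not Juniper code and not an IGMP implementation): a map from multicast group to the set of subscribed hosts.

```python
# Illustrative membership state a node might keep from IGMP-style
# join/leave requests. Class and identifiers are hypothetical.

class MembershipTable:
    def __init__(self):
        self.groups = {}  # group address -> set of host identifiers

    def join(self, group, host):
        self.groups.setdefault(group, set()).add(host)

    def leave(self, group, host):
        members = self.groups.get(group, set())
        members.discard(host)
        if not members:              # drop empty groups entirely
            self.groups.pop(group, None)

    def members(self, group):
        return self.groups.get(group, set())

table = MembershipTable()
table.join("239.1.1.1", "host-115-1")
table.join("239.1.1.1", "host-115-2")
table.leave("239.1.1.1", "host-115-1")
assert table.members("239.1.1.1") == {"host-115-2"}
```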
[0025] Network 120 may include one or more wired and/or wireless
networks. For example, network 120 may include a cellular network,
a public land mobile network (PLMN), a second generation (2G)
network, a third generation (3G) network, a fourth generation (4G)
network (e.g., a long term evolution (LTE) network), a fifth
generation (5G) network, and/or another network. Additionally, or
alternatively, network 120 may include a wide area network (WAN), a
metropolitan area network (MAN), a telephone network (e.g., the Public
Switched Telephone Network (PSTN)), an ad hoc network, an intranet,
the Internet, a fiber optic-based network, and/or a combination of
these or other types of networks.
[0026] FIG. 2 is a diagram illustrating example components of node
110. In this example implementation, node 110 may take the form of
a router, although the systems and/or methods herein may be
implemented in another type of network device. For example, node
110 may include another data transfer device, such as a gateway, a
switch, a firewall, a network interface card (NIC), a hub, a
bridge, a proxy server, an optical add-drop multiplexer (OADM), or
some other type of device that processes and/or transfers
traffic.
Although FIG. 2 illustrates example components of node 110,
in other implementations, node 110 may include additional
components, fewer components, different components, or differently
arranged components than those illustrated in FIG. 2 and described
herein. Additionally, or alternatively, one or more operations
described as being performed by a particular component of node 110
may be performed by one or more other components, in addition to or
instead of the particular component of node 110.
[0028] Node 110 may receive network traffic, as one or more packet
stream(s), from physical links, may process the packet stream(s) to
determine destination information, and may transmit the packet
stream(s) out on links in accordance with the destination
information. Node 110 may include a controller 210, a set of
input/output (I/O) units 220-1, 220-2, . . . , 220-M (where
M ≥ 1) (hereinafter referred to collectively as "I/O units
220" and individually as "I/O unit 220"), and a switch fabric
230.
[0029] Controller 210 may include a processor, a microprocessor, or
some form of hardware logic (e.g., an application specific
integrated circuit (ASIC) or a field programmable gate array
(FPGA)). In one example implementation, controller 210 may include
an Ethernet controller and/or another controller device. Controller
210 may perform high level management functions for node 110. For
example, controller 210 may maintain the connectivity and manage
information/data necessary for transferring packets by node 110.
Controller 210 may create routing tables based on network topology
information, create forwarding tables based on the routing tables,
and communicate the forwarding tables to I/O units 220. I/O units
220 may use the forwarding tables to perform route lookup for
incoming packets and perform the forwarding functions for node 110.
Controller 210 may also perform other general control and
monitoring functions for node 110. In one example, controller 210
may generate multicast routing tables based on multicast membership
information. In another example, controller 210 may generate and/or
update multicast trees, such as binary trees or sub trees (e.g., as
described in detail below) which may be used when processing
multicast traffic. Controller 210 may forward the multicast
forwarding tables and/or information associated with multicast
trees to I/O units 220.
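One possible shape for the "sub trees" the controller generates is sketched below: partition the I/O units into fixed-size subsets and build one small tree per subset, so different flows can be served by different sub trees. The partitioning scheme is invented here for illustration; the patent describes the sub tree data structures in connection with FIGS. 6A and 6B.

```python
# Hedged sketch: split the I/O units into chunks and treat the first
# unit of each chunk as that sub tree's root. All names are hypothetical.

def build_sub_trees(io_units, size):
    """Split io_units into chunks; each chunk becomes {root: [others]}."""
    sub_trees = []
    for i in range(0, len(io_units), size):
        chunk = io_units[i:i + size]
        sub_trees.append({chunk[0]: chunk[1:]})  # first unit is the root
    return sub_trees

subs = build_sub_trees(["io0", "io1", "io2", "io3", "io4", "io5"], 3)
assert subs == [{"io0": ["io1", "io2"]}, {"io3": ["io4", "io5"]}]
```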
[0030] I/O unit 220 may include a component or collection of
components to receive packets, to process incoming and/or outgoing
packets, and/or to transmit outgoing packets. For example, I/O unit
220 may include I/O ports, a packet forwarding component, an
Ethernet interface and/or another type of interface, a central
processing unit (CPU), and/or a memory device. I/O unit 220 may
include a collection of ports that receive or transmit packets via
physical links. I/O unit 220 may also include packet processing
component(s), switch interface component(s), Internet processor
component(s), memory device(s), etc. Each of I/O units 220 may be
connected to controller 210 and switch fabric 230. I/O units 220
may receive packet data on physical links connected to a network
(e.g., network 120). Each physical link could be one of many types
of transport media, such as an optical fiber or an Ethernet
cable.
[0031] I/O units 220 may process incoming packet data prior to
transmitting the data to another I/O unit 220 or the network. I/O
units 220 may perform route lookups for the data using the
forwarding table from controller 210 to determine destination
information. If the destination indicates that the data should be
sent out on a physical link, connected to I/O unit 220, then I/O
unit 220 may prepare the data for transmission by, for example,
adding any necessary headers and/or modifying existing headers,
and/or transmitting the data from the port associated with the
physical link. If the destination indicates that the data should be
sent to another I/O unit 220 via switch fabric 230, then I/O unit
220 may, if necessary, prepare the data for transmission to the
other I/O unit 220 and/or may send the data to the other I/O unit
220 via switch fabric 230.
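The forwarding decision in the paragraph above has two outcomes: transmit on a local physical link, or relay to another I/O unit via the switch fabric. A minimal sketch, with an invented forwarding-table layout:

```python
# Hypothetical sketch of the per-packet forwarding decision: look up the
# destination in a forwarding table and either transmit on a local port
# or hand the packet to another I/O unit across the switch fabric.

def forward(packet, forwarding_table, local_unit):
    """Return (action, target) for a packet based on its destination."""
    entry = forwarding_table.get(packet["dst_addr"])
    if entry is None:
        return ("drop", None)
    if entry["io_unit"] == local_unit:
        return ("transmit", entry["port"])   # send out a local physical link
    return ("fabric", entry["io_unit"])      # relay via switch fabric 230

table = {"10.1.0.0": {"io_unit": "io1", "port": 3}}
assert forward({"dst_addr": "10.1.0.0"}, table, "io1") == ("transmit", 3)
assert forward({"dst_addr": "10.1.0.0"}, table, "io0") == ("fabric", "io1")
```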
[0032] I/O units 220 may process incoming multicast traffic and may
perform a load balancing operation on the multicast traffic. For
example, I/O unit 220 may receive multicast traffic and may, based on
the forwarding tables, determine that the multicast traffic is to
be sent, by I/O unit 220, to a particular next hop (e.g., another
I/O unit 220, another node 110, host 115, etc.). Based on the
determination, I/O unit 220 may generate a copy of the multicast
traffic for transmission to the next hop. I/O unit 220 may
determine to which I/O unit 220 the multicast traffic is to be sent
based on a flow to which the traffic corresponds. I/O unit 220 may
send the multicast traffic to a particular I/O unit 220, via switch
fabric 230, based on a multicast tree that corresponds to the flow.
The particular I/O unit 220 may receive the multicast traffic and
may send replication notifications to other I/O units 220 based on
the multicast tree. The particular I/O unit 220 and the other I/O
units 220 may receive the notifications, may replicate the
multicast traffic based on a group membership associated with the
multicast traffic, and may forward copies of the multicast traffic,
based on the multicast group membership, to other nodes 110 and/or
hosts 115 and/or to other I/O units 220 that forward the copies to
the other nodes 110 and/or hosts 115.
[0033] Switch fabric 230 may include one or multiple switching
planes to facilitate communication among I/O units 220 and/or
controller 210. In one implementation, each of the switching planes
may include a single-stage switch or a multi-stage switch of
crossbar elements. Switch fabric 230 may also, or alternatively,
include processors, memories, and/or paths that permit
communication among I/O units 220 and/or controller 210.
[0034] FIG. 3 is a diagram illustrating example components of I/O
unit 220. Components of I/O unit 220 may particularly include
components to perform load balancing operations associated with
processing multicast traffic. As illustrated in FIG. 3, I/O unit
220 may include a group of input/output (I/O) ports 300-1, . . . ,
300-N and/or a packet forwarding component 305.
[0035] I/O ports 300 may be a point of attachment for a physical
link and/or may include a component to receive, transmit, and/or
process packets associated with traffic (e.g., multicast traffic
and/or other traffic) received from and/or sent to another node 110
and/or host 115. For example, I/O ports 300 may include an Ethernet
interface, an OC interface, an ATM interface, or another type of
interface. I/O ports 300 may include a variety of physical
interfaces via which packets can be received, can be transmitted,
or can be received and transmitted. In another implementation, I/O
ports 300 may collectively comprise a physical interface card
(PIC). I/O ports 300 may be connected to packet forwarding
component 305 and may send incoming packets to packet forwarding
component 305 and/or may receive outgoing packets from packet
forwarding component 305.
[0036] Packet forwarding component 305 may include one or more
components to receive packets, process incoming and/or outgoing
packets, and/or transmit outgoing packets. For example, packet
forwarding component 305 may include an Ethernet interface and/or
another type of interface, a CPU, and/or a memory device. Packet
forwarding component 305 may include packet processing
component(s), switch interface component(s), Internet processor
component(s), memory device(s), etc. Alternatively, or
additionally, packet forwarding component 305 may include a load
balancing processor 310 and a replicator 315. Packet forwarding
component 305 may be interconnected with I/O ports 300 and/or
switch fabric 230, and/or other components within I/O unit 220.
[0037] Load balancing processor 310 may include a processor, a
microprocessor, or some form of hardware logic (e.g., an ASIC or a
FPGA) and/or a component or collection of components to perform a
load balancing operation on incoming or outgoing multicast traffic.
In one example implementation, load balancing processor 310 may
include software or logic associated with a load balancing
application that performs load balancing operations on incoming
multicast traffic.
[0038] Load balancing processor 310 may determine a quantity of
copies of the multicast traffic based on a group membership (e.g.,
stored in a memory associated with I/O unit 220) associated with
the multicast traffic. Load balancing processor 310 may, for
example, identify to which nodes 110 and/or hosts 115 a copy of the
multicast traffic is to be sent based on the group membership.
Additionally, or alternatively, load balancing processor 310 may
perform a load balancing operation on the multicast traffic by
determining to which other I/O units 220 the multicast traffic is
to be sent to be processed in order to send the copies of the
multicast traffic to the identified nodes 110 and/or hosts 115.
Sending the multicast traffic to other I/O units 220 to be
processed may enable bandwidth resources and/or processing
capacity, associated with replicating the multicast traffic (e.g.,
to generate the desired quantity of copies of the multicast
traffic), to be allocated and/or distributed to the other I/O units
220 in a controlled and/or managed manner and/or in a manner that
does not waste bandwidth and/or processing resources.
[0039] Load balancing processor 310 may receive multicast traffic
and may identify a flow with which the multicast traffic is
associated. The flow may be determined based on flow information
obtained from a header of packets associated with the multicast
traffic, such as a destination address, a source address, QoS, a
traffic type (e.g., text, video, voice, data, etc.), type of
service (e.g., messaging, security, operations and maintenance,
etc.), etc. In one example, load balancing processor 310 may
generate a unique value that corresponds to the flow using a hash
function and/or some other mathematical function based on the flow
information. Each flow within the multicast traffic may have a
unique value that may be used by load balancing processor 310 when
performing the load balancing operation.
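As an illustrative sketch of the flow-identification step described above (not the claimed implementation), the unique value may be derived by hashing flow information obtained from packet headers; the field names, the choice of hash function, and the 32-bit truncation below are assumptions:

```python
import hashlib

def flow_value(src, dst, qos, traffic_type):
    """Derive a stable unique value for a flow from header fields."""
    key = f"{src}|{dst}|{qos}|{traffic_type}".encode()
    # Any deterministic hash works; SHA-1 truncated to 32 bits is one choice.
    return int.from_bytes(hashlib.sha1(key).digest()[:4], "big")

# Two flows that differ only in destination address map to distinct values.
v1 = flow_value("10.0.0.1", "224.1.1.1", 0, "video")
v2 = flow_value("10.0.0.1", "224.1.1.2", 0, "video")
assert v1 != v2
```

Because the value depends only on header fields, every packet of a given flow yields the same value, which is what lets the load balancing operation keep a flow pinned to one multicast tree.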
[0040] In one example implementation, load balancing processor 310
may use the unique value (e.g., obtained using the hash function)
associated with the flow to determine the manner in which multicast
traffic is to be forwarded to other I/O units 220 based on one or
more multicast trees, such as a pair of binary trees (e.g.,
described in detail below in FIG. 4) that correspond to all or a
portion of the I/O units 220 that perform replication operations
associated with node 110. For example, a particular binary tree, of
the pair of binary trees, may be selected based on the unique value
that corresponds to the particular binary tree. Load balancing
processor 310 may forward the multicast traffic, via switch fabric
230, to another I/O unit 220 based on the selected binary tree.
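One minimal way to realize the tree selection described above, assuming the unique value is an integer and that even values map to one tree of the pair and odd values to the other (an illustrative policy, not the claimed one; the tree encodings are hypothetical):

```python
def select_tree(unique_value, tree_pair):
    """Select T or T' from the binary tree pair based on the flow's value."""
    # Even values map to T, odd values to T' (one illustrative policy).
    return tree_pair[unique_value % 2]

# Hypothetical tree records; roots loosely follow FIG. 4.
t_tree = {"root": 1}
t_prime_tree = {"root": "M"}
pair = (t_tree, t_prime_tree)

assert select_tree(10, pair) is t_tree        # even value -> T tree
assert select_tree(11, pair) is t_prime_tree  # odd value  -> T' tree
```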
[0041] In another example implementation, load balancing processor
310 may use the unique value associated with the flow to determine the
manner in which the multicast traffic is to be forwarded to the
other I/O units 220 based on one or more other multicast trees,
such as a group of sub trees (e.g., described in detail below in
FIGS. 6 and 7) that correspond to all or a portion of the I/O units
220 that perform replication operations. For example, a particular
sub tree may be selected, from the group of sub trees, based on the
unique value that corresponds to the particular sub tree. Load
balancing processor 310 may forward the multicast traffic to
another I/O unit 220, via switch fabric 230, based on the selected
sub tree.
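The sub-tree selection may be sketched the same way, for example by reducing the unique value modulo the quantity of sub trees in the group; the modulo policy and the dictionary encoding of each sub tree are assumptions for illustration:

```python
def select_sub_tree(unique_value, sub_trees):
    """Map a flow's unique value onto one sub tree of the group."""
    return sub_trees[unique_value % len(sub_trees)]

# Hypothetical sub trees, each a root I/O unit plus two leaf units.
sub_trees = [
    {"root": 1, "leaves": (2, 3)},
    {"root": 2, "leaves": (3, 4)},
    {"root": 3, "leaves": (4, 5)},
]
assert select_sub_tree(7, sub_trees)["root"] == 2  # 7 % 3 == 1
```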
[0042] Load balancing processor 310 may determine that a copy of
the incoming multicast traffic is to be transmitted, by I/O unit
220, to a host 115 and/or another node 110 in a manner that does
not include switch fabric 230. The load balancing application
may, for example, send a replication notification to replicator 315
that instructs replicator 315 to generate one or more copies of the
multicast traffic to be sent to the host 115 and/or the other node
110. In another example, packet forwarding component 305,
associated with I/O unit 220, may receive other multicast traffic
from another I/O unit 220 via switch fabric 230 and/or a
notification to perform a replication operation on the other
multicast traffic. Load balancing processor 310 may send another
replication notification to replicator 315 that instructs
replicator 315 to generate one or more copies of the other
multicast traffic to be sent to hosts 115 and/or other nodes 110
associated with a group membership associated with the other
multicast traffic.
[0043] Replicator 315 may include one or more components to perform
replication operations on multicast traffic. Replicator 315 may
receive instructions (e.g., a replication notification) from load
balancing processor 310 to perform a replication operation on
multicast traffic and replicator 315 may generate one or more
copies of multicast traffic in response to the instruction.
Replicator 315 may forward the one or more copies of the multicast
traffic to another I/O unit 220 via switch fabric 230, and/or to
node 110 and/or host 115 via I/O port 300. In another example, the
multicast traffic may be forwarded by load balancing processor 310,
forwarding component 305, and/or another component associated with
I/O unit 220.
[0044] Although FIG. 3 illustrates example components of I/O unit
220, in other implementations, I/O unit 220 may include additional
components, fewer components, different components, or differently
arranged components than those illustrated in FIG. 3 and described
herein. Additionally, or alternatively, one or more operations
described as being performed by a particular component of I/O unit
220 may be performed by one or more other components, in addition
to or instead of the particular component of I/O unit 220.
[0045] FIG. 4 is a diagram illustrating an example load balancing
operation 400 using a binary tree pair according to an
implementation described herein. For example, packet forwarding
component 305, associated with I/O unit 220, may receive incoming
multicast traffic, via I/O port 300, intended for nodes 110 and/or
hosts 115 (e.g., included within a group membership associated with
the multicast traffic). As shown in FIG. 4, the multicast traffic
may include one or more flows (e.g., shown as flows (A, B)). The
load balancing application, hosted by load balancing processor 310
within packet forwarding component 305, may determine with which
flow the multicast traffic is associated (e.g., based on flow
information associated with the multicast traffic) in a manner
similar to that described above (e.g., with respect to FIG. 3).
Load balancing processor 310 may, for example, use a hash function
and/or some other mathematical function to generate a unique value
associated with the flow based on the flow information obtained
from the multicast traffic. Load balancing processor 310 may use a
binary tree pair (e.g., binary tree pair (410)) to determine to
which other I/O unit 220 the multicast traffic (e.g., associated
with the identified flow) is to be forwarded for processing. More
particularly, load balancing processor 310 may select a binary tree
(e.g., T tree 420 or T' tree 430), from the binary tree pair,
that is to be used to process the multicast traffic based on the
identified flow to which the selected binary tree corresponds.
[0046] Binary tree pair 410 may permit multicast traffic to be
distributed among all or a portion of I/O units 220 within node 110
in order to process the multicast traffic in a manner that does not
cause a particular I/O unit 220 to be over-tasked (e.g., where
maximum processing and/or bandwidth capacity is reached) and/or to
maximize traffic throughput via node 110. Binary tree pair 410 may
include information associated with T tree 420 and/or T' tree 430.
Binary tree pair 410 may enable load balancing processor 310 to
forward the multicast traffic, using the information associated
with T tree 420 or T' tree 430, based on the identified flow.
[0047] T tree 420 may include information associated with each I/O
unit 220 that is to perform replication operations within node 110
(e.g., I/O units 220-1, 220-2, 220-4 through 220-7, 220-9, and
220-M; shown as "1," "2," "4" through "7," "9", and "M" in FIG. 4).
T tree 420 may include a root node 422 and/or one or more leaf
nodes 424. Root node 422 may correspond to a particular I/O unit
220 (e.g., I/O unit 220-1; shown as "1" in FIG. 4) and leaf nodes
424 may correspond to the other I/O units 220 within T tree 420
(e.g., I/O units 220-2, 220-4 through 220-7, 220-9, and 220-M;
shown as "2," "4" through "7", "9," and "M" in FIG. 4). T tree 420
may include logical interconnections between root node 422 (e.g.,
1) and a pair of leaf nodes 424 (e.g., 2 and 4). Additionally, the
pair of leaf nodes 424 may each be logically interconnected with
another pair of leaf nodes 424 (e.g., 5 and 6 corresponding to 2;
and 7 and 9 corresponding to 4). One or more of leaf nodes 424
(e.g., 5), associated with the other pair of leaf nodes 424, may be
logically interconnected with another leaf node 424 (e.g., M).
[0048] T' tree 430 may include information associated with each I/O
unit 220 that is included in the other binary tree (e.g., T tree
420) of binary tree pair 410. T' tree 430 may include a root node
432 and/or one or more leaf nodes 434. In an example
implementation, T' tree 430 may be an inverse of T tree 420. For
example, a particular leaf node 424 (e.g., M) of T tree 420 that is
the furthest logical distance from root node 422, within T tree
420, may be a root node (e.g., root node 432) within T' tree 430.
Conversely, root node 422 (e.g., 1), of T tree 420, may be a leaf
node 434 associated with a furthest logical distance (e.g., a
distance of three in this example) from root node 432 within T'
tree 430. Leaf nodes 424 that were associated with the second
furthest logical distance within T tree 420 may, for example, be a
pair of leaf nodes 434 (e.g., 7 and 9) that are logically
interconnected (e.g., associated with a distance of one) with root
node 432. Leaf nodes
424 (e.g., 6) associated with the second furthest logical distance
within T tree 420 and/or that do not generate copies of multicast
traffic to be forwarded to other I/O units 220 may, for example, be
leaf nodes 434, within T' tree 430, that perform replication
operations in order to forward copies of the multicast traffic to
other I/O units 220. Leaf nodes 424 (e.g., 5) associated with the
second furthest logical distance within T tree 420 and/or that
generate copies of multicast traffic to be forwarded to other I/O
units 220 may, for example, be leaf nodes 434, within T' tree 430,
that do not perform replication operations to forward copies of the
multicast traffic to other I/O units 220.
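The inverse relationship between T tree 420 and T' tree 430 may be sketched by reversing the level order of T and re-packing the reversed sequence into the same heap-like shape. This is one illustrative construction consistent with M becoming the root of T' and 1 becoming its deepest leaf, not necessarily the exact tree of FIG. 4; the dictionary encoding of trees is an assumption:

```python
from collections import deque

def level_order(tree):
    """List a tree's I/O-unit identifiers in breadth-first order."""
    order, queue = [], deque([tree["root"]])
    while queue:
        unit = queue.popleft()
        order.append(unit)
        queue.extend(tree["children"].get(unit, ()))
    return order

def inverse_tree(tree):
    """Build T' by reversing T's level order and re-packing it heap-style."""
    units = level_order(tree)[::-1]
    children = {}
    for i, unit in enumerate(units):
        kids = tuple(units[j] for j in (2 * i + 1, 2 * i + 2) if j < len(units))
        if kids:
            children[unit] = kids
    return {"root": units[0], "children": children}

# T tree of FIG. 4: root 1; "M" is the deepest leaf.
t_tree = {"root": 1, "children": {1: (2, 4), 2: (5, 6), 4: (7, 9), 5: ("M",)}}
t_prime = inverse_tree(t_tree)
assert t_prime["root"] == "M"        # furthest leaf of T becomes root of T'
assert 1 not in t_prime["children"]  # root of T becomes a leaf of T'
```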
[0049] If load balancing processor 310 identifies the multicast
traffic as being associated with flow (A), then load balancing
processor 310 may retrieve, from a memory associated with I/O unit
220, information associated with a binary tree that corresponds to
flow (A) (e.g., T tree 420). Based on the information associated
with the binary tree, load balancing processor 310 may forward, via
switch fabric 230, the incoming multicast traffic (e.g., associated
with flow (A)) to another I/O unit 220 (e.g., I/O unit 220-1) that
corresponds to root node 422 associated with the binary tree. It
should be appreciated that the forwarding of the incoming multicast
traffic is generally performed without replicating or generating a
copy of the multicast traffic, which preserves bandwidth and/or
processing resources associated with node 110.
[0050] The other I/O unit 220 may receive the multicast traffic and
may replicate the multicast traffic in order to send, via switch
fabric 230, a copy of the multicast traffic to a pair of I/O units
220 (e.g., I/O unit 220-2 and/or I/O unit 220-4) that correspond to
the pair of logically interconnected leaf nodes 424 identified in
the binary tree (e.g., T tree 420). The pair of I/O units 220 may
each replicate the multicast traffic in order to send, via switch
fabric 230, the copies of the multicast traffic to the other pairs
of I/O units 220 (e.g., I/O units 220-5 through 220-7, and/or
220-9) that correspond to the other pair of logically
interconnected leaf nodes 424 identified in the binary tree. I/O
units 220 associated with the other pair of logically
interconnected leaf nodes 424 may perform other replications of the
multicast traffic until all replicating I/O units 220 associated
with the binary tree have received a copy of the multicast
traffic.
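The cascade described above, in which each I/O unit that receives the traffic forwards a copy to its pair of logically interconnected leaf nodes, may be sketched as a breadth-first walk over the tree; the tree shape follows FIG. 4, and the dictionary encoding is an assumption:

```python
from collections import deque

def replicate(tree):
    """Walk the multicast tree: each I/O unit that receives the traffic
    sends one copy to each of its child units until all units hold a copy."""
    received = []
    queue = deque([tree["root"]])
    while queue:
        unit = queue.popleft()
        received.append(unit)                         # this unit now holds a copy
        queue.extend(tree["children"].get(unit, ()))  # replicate to its children
    return received

t_tree = {"root": 1, "children": {1: (2, 4), 2: (5, 6), 4: (7, 9), 5: ("M",)}}
copies = replicate(t_tree)
assert set(copies) == {1, 2, 4, 5, 6, 7, 9, "M"}  # every replicating unit got a copy
assert len(copies) == len(set(copies))            # and exactly one copy each
```

Because each unit copies the traffic only to its own children, the replication load is spread across the units rather than concentrated at the ingress unit.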
[0051] Each I/O unit 220, included in the binary tree, that
received a copy of the multicast traffic may generate additional
copies of the multicast traffic that corresponds to other
respective nodes 110 and/or respective hosts 115 that are included
in the membership group associated with the multicast traffic.
Additionally, or alternatively, each I/O unit 220 may send a copy
of the multicast traffic to the respective other nodes 110 and/or
hosts 115.
[0052] In another example, if load balancing processor 310
identifies the multicast traffic as being associated with flow (B),
then load balancing processor 310 may retrieve, from a memory
associated with I/O unit 220, information associated with another
binary tree that corresponds to flow (B) (e.g., T' tree 430). Based
on the information associated with the other binary tree (e.g., an
inverse binary tree), load balancing processor 310 may forward, via
switch fabric 230, the incoming multicast traffic (e.g., associated
with flow (B)) to another I/O unit 220 (e.g., I/O unit 220-M) that
corresponds to root node 432 associated with the binary tree. It
should be appreciated that the forwarding of the incoming multicast
traffic is generally performed without replicating or generating a
copy of the multicast traffic, which preserves bandwidth and/or
processing resources associated with node 110. Additionally, or
alternatively, processing the multicast traffic associated with
flow (B), using the other binary tree, increases the diversity with
which bandwidth and/or processing resources of node 110 are
distributed among I/O units 220 when processing multicast
traffic.
[0053] I/O unit 220-M may receive the multicast traffic and may
perform replication operations and/or forwarding operations (e.g.,
via switch fabric 230) to leaf nodes 434 in a manner similar to
that described above (e.g., with respect to T tree 420).
Alternatively, or additionally, leaf nodes 434, associated with T'
tree 430, may perform further replication and/or forwarding
operations (e.g., via switch fabric 230) and may send copies of the
multicast traffic (e.g., associated with flow (B)) to other nodes
110 and/or hosts 115 that are included within a group membership
associated with the multicast traffic.
[0054] In yet another example, I/O unit 220 may receive the
incoming multicast traffic (e.g., associated with flow (A) and/or
flow (B)) and may perform replication operations on the multicast
traffic based on a determination that copies of the multicast
traffic are to be sent to other nodes 110 and/or hosts 115 in a
manner that does not include switch fabric 230.
[0055] FIG. 5 is a diagram illustrating an example process 500 for
processing multicast traffic using a binary tree pair. In one
example implementation, process 500 may be performed by node 110.
In another example implementation, some or all of process 500 may
be performed by another device or group of devices including or
excluding node 110.
[0056] As shown in FIG. 5, process 500 may include receiving
multicast traffic (block 505) and obtaining flow information
associated with received traffic (block 510). For example, node 110
may receive multicast traffic from network 120 and may obtain
information associated with a flow to which the traffic corresponds
from packets (e.g., packet headers, trailers, payloads, etc.)
associated with the traffic. The traffic may be received by I/O
unit 220 associated with node 110. The information associated with
the flow may include a source address, a destination address, a
flow identifier, a type of traffic (e.g., data, video, voice, text,
etc.), a type of service (e.g., a messaging service; a security
service; an operation, administration, and maintenance (OAM)
service; and/or some other type of service), and/or other
information associated with the flow.
[0057] In one example, I/O unit 220 may retrieve information
associated with a group membership associated with the multicast
traffic and/or the flow to identify a quantity of nodes 110 and/or
hosts 115 that are to receive a copy of the traffic.
[0058] As also shown in FIG. 5, process 500 may include generating
a unique flow identifier based on the flow information (block 515)
and retrieving information associated with a selected binary tree
(block 520). For example, a load balancing application, hosted by
packet forwarding component 305 within I/O unit 220, may use a hash
function and/or some other mathematical function to generate a
unique identifier that corresponds to the flow. The identifier may
be generated, using the hash and/or mathematical function, based on
the information associated with the flow obtained from the
traffic.
[0059] Load balancing processor 310 may use the unique identifier
to select a binary tree that is to be used to process the traffic.
For example, load balancing processor 310 may determine whether the
unique identifier corresponds to a binary tree (e.g., T tree 420
with respect to FIG. 4) associated with a binary tree pair (e.g.,
binary tree pair 410 with respect to FIG. 4) or an inverse binary
tree (e.g., T' tree 430 with respect to FIG. 4) associated with the
binary tree pair. Based on the determination, load balancing
processor 310 may select the binary tree or the inverse binary tree
to process the traffic and may retrieve information associated with
the binary tree and/or the inverse binary tree from a memory
associated with node 110.
[0060] In another example implementation, load balancing processor
310 may uniquely identify the flow based on the information
associated with the flow, which may not include generating the
unique identifier. Load balancing processor 310 may, for example,
use the uniquely identified flow to select the binary tree or the
inverse binary tree to process the traffic.
[0061] As further shown in FIG. 5, if traffic is to be processed
using a binary tree (block 525--binary tree), then process 500 may
include sending the traffic to another I/O unit 220, via switch
fabric 230, for processing based on the binary tree (block 530).
For example, load balancing processor 310 may determine that the
unique identifier (or information associated with the flow)
corresponds to the binary tree (e.g., T tree 420 with respect to
FIG. 4) and may identify a root node (e.g., root node 422 of FIG.
4) within the binary tree. Based on the identification of the root
node, load balancing processor 310 may send the traffic, via switch
fabric 230, to another I/O unit 220, that corresponds to the root
node within the binary tree, to be processed. The other I/O unit
220 may receive the traffic, may replicate the traffic in order to
generate copies of the traffic, and may send a copy of the traffic
to a pair of other I/O units 220 that corresponds to a pair of leaf
nodes (e.g., leaf nodes 424) that are logically interconnected
(e.g., associated with a distance of one) with the other I/O unit
220.
[0062] Each I/O unit 220, of the pair of other I/O units 220, may
replicate the traffic and may send a copy of the traffic to
respective other pairs of I/O units 220 that are logically
interconnected (e.g., associated with a distance of two relative to
root node 422) with the pair of I/O units 220 to be processed. The
replicating and/or the sending of copies of the traffic may be
performed by additional pairs of I/O units 220 (e.g., associated
with increasing distances relative to root node 422) until all I/O
units 220, associated with the binary tree, have received a copy of
the traffic.
[0063] The other I/O unit 220 (e.g., corresponding to root node
422) that received the traffic and I/O units 220 (e.g.,
corresponding to leaf nodes 424) associated with the binary tree
that received a copy of the traffic, may perform one or more
replications to generate additional copies of the traffic to be
sent to nodes 110 and/or hosts 115 associated with the group
membership of the traffic. Alternatively, or additionally, each I/O
unit 220 may send a copy of the traffic to a respective node 110
and/or host 115 to which each I/O unit 220 corresponds and/or is
interconnected.
[0064] As yet further shown in FIG. 5, if traffic is to be
processed using an inverse binary tree (block 525--inverse binary
tree), then process 500 may include sending the traffic to a
further I/O unit 220, via switch fabric 230, for processing based
on the inverse binary tree (block 535). For example, load balancing
processor 310 may determine that the unique identifier (or
information associated with the flow) corresponds to the inverse
binary tree (e.g., T' tree 430 with respect to FIG. 4) and may
identify a root node (e.g., root node 432 of FIG. 4) within the
inverse binary tree. Based on the identification of the root node,
load balancing processor 310 may send the traffic, via switch
fabric 230, to another I/O unit 220, that corresponds to the root
node within the inverse binary tree, to be processed. The other I/O
unit 220 may receive the traffic, may replicate the traffic in
order to generate copies of the traffic, and may send a copy of the
traffic to a pair of other I/O units 220 that corresponds to a pair
of leaf nodes (e.g., leaf nodes 434) that are logically
interconnected (e.g., associated with a distance of one) with the
other I/O unit 220.
[0065] Each I/O unit 220, of the pair of other I/O units 220, may
replicate the traffic and may send a copy of the traffic to
respective other pairs of I/O units 220 that are logically
interconnected (e.g., associated with a distance of two relative to
root node 432) with the pair of I/O units 220 to be processed. The
replicating and/or the sending of copies of the traffic may be
performed by additional pairs of I/O units 220 (e.g., associated
with increasing distances relative to root node 432) until all I/O
units 220, associated with the inverse binary tree, have received a
copy of the traffic.
[0066] The other I/O unit 220 (e.g., corresponding to root node
432) that received the traffic and I/O units 220 (e.g.,
corresponding to leaf nodes 434) associated with the inverse binary
tree that
received a copy of the traffic, may perform one or more
replications to generate additional copies of the traffic to be
sent to nodes 110 and/or hosts 115 associated with the group
membership of the traffic. Alternatively, or additionally, each I/O
unit 220 may send a copy of the traffic to a respective node 110
and/or host 115 to which each I/O unit 220 corresponds and/or is
interconnected.
[0067] FIGS. 6A and 6B are diagrams illustrating example sub tree
data structure 600 (hereinafter referred to as "data structure
600") and sub tree data structure 630 (hereinafter referred to as
"data structure 630"). Data structure 600 and/or data structure 630
may be stored in a memory associated with node 110 and/or a memory
associated with all or a portion of I/O units 220 associated with
node 110. As illustrated in FIG. 6A, data structure 600 may include
a sub tree set field 605.
[0068] Sub tree set 605 may store information associated with a sub
tree set that includes a set of identifiers that correspond to I/O
units 220, which are used by load balancing processor 310 to
process multicast traffic. For example, sub tree set 610 may
include a set of identifiers that are associated with I/O units 220
that participate in replication operations (e.g., to generate a
copy of multicast traffic) associated with node 110. Set 610 may
include an identifier that corresponds to a root node 612. For
example, the identifier (e.g., "1") that corresponds to root node
612 may be associated with a particular I/O unit 220 (e.g., I/O
unit 220-1) that performs replication operations. Set 610 may
include other identifiers that correspond to leaf nodes 613. For
example, the identifiers (e.g., "2" and "3") that correspond to
leaf nodes 613 may be associated with other I/O units 220 (e.g.,
I/O unit 220-2 and I/O unit 220-3) that perform replication
operations and which may receive multicast traffic from I/O unit
220 corresponding to root node 612.
[0069] Data structure 600 may store information associated with
other sub tree sets (e.g., sets 614 through 622). Each of the other
sub tree sets may include an identifier that corresponds to a
respective root node. For example, set 614 may include a root node
that corresponds to I/O unit 220-2; set 616 may include a root node
that corresponds to I/O unit 220-3; set 618 may include a root node
that corresponds to I/O unit 220-4; set 620 may include a root node
that corresponds to I/O unit 220-5; and set 622 may include a root
node that corresponds to I/O unit 220-6. Alternatively, or
additionally, each of the other sub trees may include an identifier
that corresponds to respective leaf nodes. For example, set 614 may
include leaf nodes that correspond to I/O unit 220-3 and I/O unit
220-4; set 616 may include leaf nodes that correspond to I/O unit
220-4 and I/O unit 220-5; set 618 may include leaf nodes that
correspond to I/O unit 220-5 and I/O unit 220-6; set 620 may
include leaf nodes that correspond to I/O unit 220-6 and I/O unit
220-1; and set 622 may include leaf nodes that correspond to I/O
unit 220-1 and I/O unit 220-2.
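The layout of data structure 600 suggests a simple generation rule: set i has I/O unit i as its root and, as leaves, the next two unit identifiers, wrapping around. A sketch under that assumption (the function name and dictionary encoding are illustrative, not part of the disclosure):

```python
from collections import Counter

def build_sub_tree_sets(n_units, n_leaves=2):
    """Generate sub tree sets as laid out in data structure 600:
    set i has root i and the next n_leaves unit identifiers, wrapping."""
    sets = []
    for root in range(1, n_units + 1):
        leaves = tuple((root + k - 1) % n_units + 1 for k in range(1, n_leaves + 1))
        sets.append({"root": root, "leaves": leaves})
    return sets

sets = build_sub_tree_sets(6)
assert sets[0] == {"root": 1, "leaves": (2, 3)}  # set 610
assert sets[4] == {"root": 5, "leaves": (6, 1)}  # set 620

# Each of the six units appears in exactly three sets (overlap value R = 3).
counts = Counter(u for s in sets for u in (s["root"], *s["leaves"]))
assert all(c == 3 for c in counts.values())
```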
[0070] The quantity of sub trees and/or I/O units 220 that are
included in the sub trees may be configured in hardware or via
software by a user of node 110. For example, increasing a quantity
of I/O units 220 (e.g., leaf nodes) within a set (e.g., from 2 to
3, 4, 5, etc.) may reduce a quantity of replications that are to be
performed on a per-I/O unit 220 basis, which may reduce an amount
of jitter that is introduced, by the replication operation, when
processing packets associated with the multicast traffic. However,
increasing the quantity of I/O units 220 in each set may increase a
quantity of replication notifications (e.g., when copies of
multicast traffic are transferred from the root node to leaf nodes),
which may increase bandwidth utilization associated with node 110.
Additionally, a degree of overlap between leaf nodes between sets,
as described below in FIG. 6B, may also be configured as hardware
associated with node 110 or via software by a user of node 110.
[0071] As illustrated in FIG. 6B, data structure 630 may include a
sub tree set field 635. In a manner similar to that described above
(e.g., with respect to FIG. 6A), sub tree set field 635 may store
information associated with sub tree sets (e.g., sets 640 through
655). Each of the sub tree sets may include an identifier,
associated with a particular I/O unit 220 that corresponds to a
respective root node. For example, set 640 may include a root node
that corresponds to I/O unit 220-1; set 645 may include a root node
that corresponds to I/O unit 220-3; set 650 may include a root node
that corresponds to I/O unit 220-5; and set 655 may include a root
node that corresponds to I/O unit 220-2. Alternatively, or
additionally, each of the sub trees may include an identifier that
corresponds to respective leaf nodes. For example, set 640 may
include leaf nodes that correspond to I/O unit 220-2 and I/O unit
220-3; set 645 may include leaf nodes that correspond to I/O unit
220-4 and I/O unit 220-5; set 650 may include leaf nodes that
correspond to I/O unit 220-6 and I/O unit 220-1; and set 655 may
include leaf nodes that correspond to I/O unit 220-4 and I/O unit
220-6.
[0072] Each of the I/O units 220 is included in two sub tree sets
of data structure 630 (e.g., an overlap value, "R"=2) as compared
to three sub tree sets of data structure 600 of FIG. 6A (e.g., an
overlap value, R=3). For example, I/O unit 220-1 is included in
sets 640 and 650 of data structure 630 and is included in sets 610,
620, and 622 of data structure 600. Therefore, the quantity of sub
tree sets in which a particular I/O unit 220 is included may be set
by hardware associated with node 110 and/or by software by a user
of node 110. Increasing the overlap value may increase a quantity
of combinations in which replication operations, and thus bandwidth
and/or processing capacity, can be distributed and/or allocated
among I/O units 220 of node 110. Alternatively, or additionally,
one or more sub tree sets, within data structure 600 or data
structure 630, can be duplicated to increase a quantity of sub tree
sets to be used by node 110 when processing multicast traffic.
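The sub tree sets of FIG. 6B and the overlap value R described above can be sketched as follows. The dictionary layout, string identifiers, and helper function are illustrative assumptions, not details of the specification:

```python
from collections import Counter

# Hypothetical sketch of data structure 630 (FIG. 6B): each sub tree set
# maps a root I/O unit to its leaf I/O units. Identifiers are
# illustrative only.
DATA_STRUCTURE_630 = {
    "set_640": {"root": "220-1", "leaves": ["220-2", "220-3"]},
    "set_645": {"root": "220-3", "leaves": ["220-4", "220-5"]},
    "set_650": {"root": "220-5", "leaves": ["220-6", "220-1"]},
    "set_655": {"root": "220-2", "leaves": ["220-4", "220-6"]},
}

def overlap_values(sub_tree_sets):
    """Count how many sub tree sets each I/O unit appears in,
    as a root or as a leaf (the overlap value R)."""
    counts = Counter()
    for entry in sub_tree_sets.values():
        counts[entry["root"]] += 1
        counts.update(entry["leaves"])
    return counts
```

For this table, every I/O unit appears in exactly two sets, consistent with the overlap value R=2 described for data structure 630.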
[0073] Although FIGS. 6A and 6B show example fields of data
structure 600 and data structure 630, in other implementations,
data structure 600 and/or data structure 630 may contain different
fields and/or additional fields than depicted in FIGS. 6A and/or
6B. In other implementations, there may be additional sub trees,
fewer sub trees, differently arranged sub trees, or different sub
trees than depicted in FIGS. 6A and/or 6B. Additionally, or
alternatively, in other implementations, the sub tree sets may
include additional leaf nodes, fewer leaf nodes, different leaf
nodes, or differently arranged leaf nodes than depicted in FIGS. 6A
and/or 6B. For example, the quantity and/or configuration of sub
trees, within data structure 600 and/or data structure 630, may
change when the quantity of I/O units 220 performing replications
changes, when the overlap value changes, when the quantity of leaf
nodes in each set changes, when a quantity of duplicate sets
changes, etc.
[0074] FIG. 7 is a diagram illustrating an example load balancing
operation 700, using a group of sub trees, according to an
implementation described herein. For example, packet forwarding
component 305, associated with I/O unit 220, may receive incoming
multicast traffic, via I/O port 300, intended for nodes 110 and/or
hosts 115 based on a group membership associated with the multicast
traffic. As shown in FIG. 7, the multicast traffic may include one
or more flows (e.g., shown as flows (A, B, C, D, E, F)). The load
balancing application, hosted by load balancing processor 310
within packet forwarding component 305, may determine with which
flow particular multicast traffic is associated (e.g., based on
flow information associated with the multicast traffic) in a manner
similar to that described above (e.g., with respect to FIG. 3).
Load balancing processor 310 may use a hash function and/or some
other mathematical function to generate a unique value associated
with the flow based on the flow information obtained from the
multicast traffic. Load balancing processor 310 may use information
associated with a sub tree, obtained from a sub tree data structure
(e.g., data structure 600 of FIG. 6A) to determine to which other
I/O unit 220 the multicast traffic (e.g., associated with the
identified flow) is to be forwarded for processing. More
particularly, load balancing processor 310 may select a sub tree
(e.g., sub tree 705, . . . , or sub tree 730) that is to be used to
process the multicast traffic that corresponds to the unique value
associated with the flow.
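The flow-to-sub-tree selection described above can be sketched as a hash-based lookup. The choice of SHA-256, the flow key fields, and the modulo step are assumptions for illustration; the specification does not fix a particular hash or mapping:

```python
import hashlib

# Hypothetical identifiers standing in for sub trees 705 through 730
# of FIG. 7.
SUB_TREES = ["705", "710", "715", "720", "725", "730"]

def select_sub_tree(flow_info, sub_trees=SUB_TREES):
    """Generate a value unique to the flow and map it onto a sub tree,
    so that every packet of the same flow is handled by the same
    sub tree.

    flow_info: tuple of fields identifying the flow, e.g.
    (source address, group address).
    """
    digest = hashlib.sha256(repr(flow_info).encode()).digest()
    unique_value = int.from_bytes(digest[:8], "big")
    return sub_trees[unique_value % len(sub_trees)]
```

Because the hash is deterministic, repeated packets of one flow always select the same sub tree, which is what keeps a flow's replication load on one set of I/O units.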
[0075] In another example implementation, load balancing processor
310 may identify the flow based on the flow information, and may
select the sub tree that corresponds to the identified flow in a
manner that does not include the unique value.
[0076] Sub trees 705-730 may permit multicast traffic to be
processed by all or a portion of I/O units 220 within node 110 in a
manner that controls and/or manages utilization of bandwidth and/or
processing resources, associated with node 110, among I/O units
220. Sub trees 705-730 may correspond to sub tree sets 610 and
614-622 (e.g., of FIG. 6A), respectively. For example, sub tree 705
may include a root node (e.g., associated with I/O unit 220-1;
shown as "1") that corresponds to root node 612 (FIG. 6A). Sub tree
705 may include leaf nodes (e.g., associated with I/O units 220-2
and 220-3; shown as "2" and "3") that correspond to leaf nodes 613
(FIG. 6A). The leaf nodes may be logically interconnected with the
root node (e.g., as shown by the arrows in sub tree 705). Sub trees
710-730 may each include a root node that is logically
interconnected with leaf nodes in a manner similar to that
described with respect to sub tree 705 and/or sub tree set 610.
[0077] In one example, I/O unit 220 may generate the unique value
(e.g., using the hash function) and/or may identify a particular
flow (e.g., flow (A)) based on the flow information obtained from
the multicast traffic. Based on the unique value and/or the
identified flow, load balancing processor 310 may retrieve, from a
sub tree data structure (e.g., data structure 600) stored in a
memory associated with node 110 and/or I/O unit 220, information
associated with a sub tree that corresponds to the unique value
and/or the identified flow (e.g., sub tree 705). From the
information associated with the sub tree, load balancing processor
310 may forward, via switch fabric 230, the incoming multicast
traffic (e.g., associated with flow (A)) to another I/O unit 220
(e.g., I/O unit 220-1) that corresponds to a root node of the sub
tree. It should be appreciated that the forwarding of the incoming
multicast traffic is generally performed without replicating or
generating a copy of the multicast traffic, which preserves
bandwidth and/or processing resources associated with node 110.
[0078] The other I/O unit 220 may receive the multicast traffic and
may replicate the multicast traffic in order to send, via switch
fabric 230, a copy of the multicast traffic to a pair of I/O units
220 (e.g., I/O unit 220-2 and/or I/O unit 220-3) that correspond to
the logically interconnected leaf nodes within the sub tree (e.g.,
sub tree 705). Based on a quantity of nodes 110 and/or hosts 115
included in a membership group associated with the identified flow,
the I/O units 220 associated with the sub tree may replicate the
multicast traffic. For example, if sixty (60) nodes 110 and/or
hosts 115 are included in the membership group, then each I/O unit
220, associated with the sub tree, may replicate the multicast
traffic in order to generate a portion of the sixty copies of
multicast traffic (e.g., 20 copies). In another example
implementation, the root node and/or leaf nodes may generate
unequal portions of the quantity of copies to be generated. For
example, I/O unit 220-1 (e.g., the root node) may generate 15
copies, I/O unit 220-2 may generate 25 copies, and/or I/O unit
220-3 may generate 20 copies.
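The split described above (e.g., sixty copies divided among the three I/O units of the sub tree) can be sketched as follows. The remainder-to-root policy is an assumption added for illustration; as noted, the units may also generate unequal portions:

```python
def split_copies(total_copies, num_units):
    """Divide total_copies of multicast traffic among the num_units
    I/O units of a sub tree; the first (root) unit absorbs any
    remainder."""
    base, remainder = divmod(total_copies, num_units)
    shares = [base] * num_units
    shares[0] += remainder
    return shares
```

For example, `split_copies(60, 3)` yields 20 copies per unit, matching the even split in the text, while `split_copies(61, 3)` gives the root one extra copy.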
[0079] I/O units 220 associated with sub tree 705 may send the
generated copies of multicast traffic to the nodes 110 and/or hosts
115 included in the group membership. For example, I/O unit 220-1
may send all or a portion of the generated copies to nodes 110
and/or hosts 115 with which I/O unit 220-1 is interconnected via
one or more I/O ports 300. Alternatively, or additionally, each I/O
unit 220, associated with sub tree 705, may send the generated
copies of the multicast traffic to other I/O units 220 to be sent
to nodes 110 and/or hosts 115.
[0080] In another example, I/O unit 220 may receive other multicast
traffic and use another sub tree (e.g., sub tree 710, . . . , or
730) to process the multicast traffic. In one example, load
balancing processor 310 may determine that the traffic is
associated with flow (B) and may send the traffic to another I/O
unit 220 (e.g., I/O unit 220-2) that corresponds to the root node
associated with sub tree 710. In another example, load balancing
processor 310 may determine that the traffic is associated with
flow (C) and may send the traffic to a further I/O unit 220 (e.g.,
I/O unit 220-3) that corresponds to the root node associated with
sub tree 715. Load balancing processor 310 may continue to receive
multicast traffic and may, in a manner similar to that described
above, process the multicast traffic using a sub tree that
corresponds to the identified flow (and/or a generated unique value
associated with the flow). Over a period of time, all or a portion
of the flows associated with the multicast traffic may be processed
based on all or a portion of the sub trees, which enables bandwidth
resources and/or processing capacity of node 110 to be allocated
and/or distributed among I/O units 220 in a controlled and/or
managed fashion.
[0081] In the event that a particular I/O unit 220 malfunctions or
ceases to operate, one or more sub trees in which the particular
I/O unit 220 is included may be masked in a manner that enables
node 110 to continue to process multicast traffic. For example, a
user of node 110 may cause sub trees 715, 720 and/or 725 to become
disabled and/or bypassed when I/O unit 220-3 ceases to operate
and/or malfunctions. The user may, for example, cause information
associated with the sub trees 715, 720, and/or 725 to be removed
from a sub tree data structure (e.g., data structure 600 of FIG.
6A), to be overwritten, and/or to be ignored. In this example, load
balancing processor 310 may process multicast traffic using other
sub trees 705, 710, and/or 730. Processing the multicast traffic
using the other sub trees provides additional processing diversity
and/or flexibility to node 110, which may reduce packet loss
associated with I/O unit 220 failure and/or malfunction.
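Masking the sub trees that contain a failed I/O unit can be sketched as a filter over the sub tree data structure. The table below is a hypothetical example (not the exact sets of FIG. 6A) in which I/O unit 220-3 appears in sub trees 715, 720, and 725, matching the example in the text:

```python
# Hypothetical sub tree table; identifiers are illustrative only.
sub_trees = {
    "705": {"root": "220-1", "leaves": ["220-2", "220-6"]},
    "710": {"root": "220-2", "leaves": ["220-4", "220-5"]},
    "715": {"root": "220-3", "leaves": ["220-5", "220-6"]},
    "720": {"root": "220-4", "leaves": ["220-3", "220-1"]},
    "725": {"root": "220-5", "leaves": ["220-3", "220-2"]},
    "730": {"root": "220-6", "leaves": ["220-1", "220-4"]},
}

def mask_failed_unit(sub_trees, failed_unit):
    """Keep only the sub trees in which failed_unit appears neither as
    a root nor as a leaf, so multicast traffic continues to be
    processed over the remaining sub trees."""
    return {
        name: tree
        for name, tree in sub_trees.items()
        if tree["root"] != failed_unit
        and failed_unit not in tree["leaves"]
    }
```

Masking I/O unit 220-3 in this table leaves sub trees 705, 710, and 730 available, as in the example above.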
[0082] FIG. 8 is a diagram illustrating example process 800 for
processing multicast traffic using a group of sub trees. In one
example implementation, process 800 may be performed by node 110.
In another example implementation, some or all of process 800 may
be performed by another device or group of devices including or
excluding node 110.
[0083] As shown in FIG. 8, process 800 may include receiving
multicast traffic (block 805), obtaining flow information
associated with received traffic (block 810), and generating a
unique flow identifier based on the flow information (block 815).
For example, node 110 may receive multicast traffic and a load
balancing application (e.g., hosted by a particular I/O unit 220
that received the traffic) may, in a manner similar to that
described above (e.g., with respect to block 505 of FIG. 5), obtain
information associated with a flow (e.g., information associated
with flow (D) of FIG. 7) from packets associated with the
traffic.
[0084] Load balancing processor 310 may generate a unique flow
identifier associated with the flow. For example, load balancing
processor 310 may use a hash function and/or some other
mathematical function to generate a unique identifier that
corresponds to the flow. The identifier may be generated, using the
hash and/or mathematical function, based on the information
associated with the flow obtained from the traffic.
[0085] As also shown in FIG. 8, process 800 may include retrieving
information associated with a selected sub tree (block 820). Load
balancing processor 310 may determine to which sub tree the unique
identifier corresponds. Based on the determination, load balancing
processor 310 may select a sub tree (e.g., sub tree 720 of FIG. 7,
which corresponds to flow (D)) to be used to process the traffic
and may retrieve information associated with the sub tree from a
sub tree data structure stored in a memory associated with node
110.
[0086] In another example implementation, load balancing processor
310 may identify the flow based on the information associated with
the flow, which may not include generating the unique identifier.
Load balancing processor 310 may, for example, use the identified
flow to select the sub tree with which to process the traffic.
[0087] As further shown in FIG. 8, process 800 may include sending
traffic to another I/O unit 220, via switch fabric 230, for
processing based on the selected sub tree (block 825). For example,
load balancing processor 310 may determine that the unique
identifier (or information associated with the flow, such as flow
(D)) corresponds to the selected sub tree (e.g., sub tree 720 of
FIG. 7) and may identify a particular I/O unit 220 (e.g., I/O unit
220-4) that corresponds to a root node within the selected sub
tree. Based on the identification of the root node, load balancing
processor 310 may send the multicast traffic, via switch fabric
230, to the particular I/O unit 220 to be processed. The particular
I/O unit 220 may receive the multicast traffic, may replicate the
multicast traffic in order to generate copies of the multicast
traffic, and may send a copy of the multicast traffic to other I/O
units 220 (e.g., I/O units 220-5 and 220-6) that correspond to leaf
nodes within the selected sub tree.
[0088] I/O units 220 (e.g., I/O unit 220-4, 220-5, and/or 220-6)
associated with the selected sub tree (e.g., sub tree 720) may
replicate the multicast traffic based on nodes 110 and/or hosts 115
included in a membership group associated with the identified flow.
Each I/O unit 220, associated with the selected sub tree, may
replicate the multicast traffic in order to generate copies of the
multicast traffic for a respective portion of the quantity of nodes
110 and/or hosts 115 included in the group membership.
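Blocks 810 through 825, together with the replication step just described, can be strung into a short end-to-end sketch. The sub tree membership for 720 follows the example in the text (root 220-4, leaves 220-5 and 220-6); the other entries, the hash step, and the equal-split policy are illustrative assumptions:

```python
import hashlib

# Hypothetical sub tree table: root I/O unit first, then leaf units.
SUB_TREE_MEMBERS = {
    "705": ["220-1", "220-2", "220-3"],
    "710": ["220-2", "220-4", "220-5"],
    "715": ["220-3", "220-5", "220-6"],
    "720": ["220-4", "220-5", "220-6"],
    "725": ["220-5", "220-1", "220-2"],
    "730": ["220-6", "220-1", "220-4"],
}

def process_800(flow_info, group_size):
    """Blocks 810-825: identify the flow, select a sub tree, and assign
    each member I/O unit its share of the copies to generate."""
    digest = hashlib.sha256(repr(flow_info).encode()).digest()
    unique_id = int.from_bytes(digest[:8], "big")              # block 815
    name = sorted(SUB_TREE_MEMBERS)[unique_id % 6]             # block 820
    units = SUB_TREE_MEMBERS[name]                             # block 825
    base, rem = divmod(group_size, len(units))
    shares = {u: base + (1 if i < rem else 0)
              for i, u in enumerate(units)}
    return name, shares
```

With a group membership of sixty, each of the three I/O units of the selected sub tree is assigned twenty copies, mirroring the replication split described above.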
[0089] I/O units 220 associated with the selected sub tree may send
the generated copies of multicast traffic to nodes 110 and/or hosts
115 included in the group membership. For example, I/O unit 220
associated with the root node (e.g., I/O unit 220-4) may send a
respective portion of the copies of multicast traffic to nodes 110
and/or hosts 115 with which I/O unit 220, associated with the root
node, is interconnected. I/O units 220 associated with the leaf
nodes (e.g., I/O unit 220-5 and/or I/O unit 220-6) may send other
respective portions of the copies of multicast traffic to nodes 110
and/or hosts 115 with which I/O units 220, associated with the leaf
nodes, are interconnected.
[0090] Alternatively, or additionally, each I/O unit 220,
associated with the selected sub tree, may send all or a portion of
the generated copies of the multicast traffic to other I/O units
220 to be sent to nodes 110 and/or hosts 115 included in the group
membership. For example, I/O unit 220-4 may send all or a portion
of the generated copies of multicast traffic to another I/O unit
220 (e.g., an I/O unit 220 that is not included in sub tree 720,
such as I/O unit 220-3) to be outputted to nodes 110 and/or hosts
115 with which the other I/O unit 220 is interconnected. I/O unit
220-3 may, in another example, send all or a portion of the
generated copies of multicast traffic to another I/O unit 220
(e.g., an I/O unit 220 that is not included in any of sub trees 705
through 730, such as I/O unit 220-M) to be outputted to nodes 110
and/or hosts 115 with which the other I/O unit 220 is
interconnected.
[0091] Although FIGS. 5 and 8 show example processes 500 and 800
for processing multicast traffic using a binary tree pair and a
group of sub trees, respectively, in another example
implementation, a process for processing multicast traffic may be
executed by I/O unit 220 using the binary tree pair and the group
of sub trees. For example, I/O unit 220 may receive multicast
traffic and may process a portion of flows associated with the
multicast traffic using one or more sub trees. Additionally, or
alternatively, I/O unit 220 may process another portion of the
flows associated with the multicast traffic using a binary tree
and/or an inverse binary tree.
[0092] The foregoing description provides illustration and
description, but is not intended to be exhaustive or to limit the
invention to the precise form disclosed. Modifications and
variations are possible in light of the above teachings or may be
acquired from practice of the invention.
[0093] While series of blocks have been described with regard to
FIGS. 5 and 8, the order of the blocks may be modified in other
implementations. Further, non-dependent blocks may be performed in
parallel.
[0094] It will be apparent that embodiments, as described herein,
may be implemented in many different forms of software, firmware,
and hardware in the embodiments illustrated in the figures. The
actual software code or specialized control hardware used to
implement embodiments described herein is not limiting of the
invention. Thus, the operation and behavior of the embodiments were
described without reference to the specific software code--it being
understood that software and control hardware may be designed to
implement the embodiments based on the description herein.
[0095] Further, certain portions, described above, may be
implemented as a component or logic that performs one or more
functions. A component or logic, as used herein, may include
hardware, such as a processor, ASIC, or FPGA, or a combination of
hardware and software (e.g., a processor executing software).
[0096] Even though particular combinations of features are recited
in the claims and/or disclosed in the specification, these
combinations are not intended to limit the disclosure of the
invention. In fact, many of these features may be combined in ways
not specifically recited in the claims and/or disclosed in the
specification. Although each dependent claim listed below may
directly depend on only one other claim, the disclosure of the
invention includes each dependent claim in combination with every
other claim in the claim set.
[0097] No element, act, or instruction used in the present
application should be construed as critical or essential unless
explicitly described as such. Also, as used herein, the article "a"
is intended to include one or more items. Where only one item is
intended, the term "one" or similar language is used. Further, the
phrase "based on" is intended to mean "based, at least in part, on"
unless explicitly stated otherwise.
* * * * *