U.S. patent application number 15/338917 was filed with the patent office on 2017-04-20 for system and method for communication.
The applicant listed for this patent is NEC Corporation. The invention is credited to Yuta ASHIDA and Toshio Koide.
Application Number: 20170111231 / 15/338917
Document ID: /
Family ID: 49222277
Filed Date: 2017-04-20

United States Patent Application 20170111231
Kind Code: A1
ASHIDA; Yuta; et al.
April 20, 2017
SYSTEM AND METHOD FOR COMMUNICATION
Abstract
A communications system includes: a first node device provided
in a first network; a first controller controlling the first node
device; a second node device provided in a second network and
connected to the first node device; and a second controller
controlling the second node device. The first controller sets the
first node device with a processing rule according to which packets
transferred between the first and second controllers are processed.
The second controller sets the second node device with a processing
rule according to which the packets are processed. The first and second controllers exchange the packets with each other through at least the first and second node devices.
Inventors: ASHIDA; Yuta (Tokyo, JP); Koide; Toshio (Tokyo, JP)
Applicant: NEC Corporation, Tokyo, JP
Family ID: 49222277
Appl. No.: 15/338917
Filed: October 31, 2016
Related U.S. Patent Documents

Application Number | Filing Date | Patent Number
14387319 | Sep 23, 2014 | 9515868
PCT/JP2013/001913 | Mar 21, 2013 |
15338917 | |
Current U.S. Class: 1/1
Current CPC Class: H04L 69/324 20130101; H04L 41/12 20130101; Y02D 30/30 20180101; H04L 45/00 20130101; H04L 41/04 20130101; Y02D 30/00 20180101
International Class: H04L 12/24 20060101 H04L012/24; H04L 29/08 20060101 H04L029/08

Foreign Application Data

Date | Code | Application Number
Mar 23, 2012 | JP | 2012-068286
Claims
1-26. (canceled)
27. A control apparatus configured to control a plurality of
forwarding nodes in a first network, the control apparatus
comprising: a memory configured to store topology data including
the forwarding nodes in the first network; and a processor
configured to execute program instructions to: receive, from one of
the forwarding nodes connected to an edge forwarding node of a
second network, a retrieval packet including an identifier which
represents a second control apparatus, wherein the second control
apparatus controls the edge forwarding node of the second network;
identify, based on the identifier, boundary information defining a
boundary between the first network and the second network; and
store the boundary information with the topology data.
28. The control apparatus according to claim 27, wherein the
processor is configured to execute program instructions to identify
the boundary information when the identifier does not represent the
control apparatus.
29. The control apparatus according to claim 27, wherein the
processor is configured to execute program instructions to:
identify, based on the boundary information and the topology data,
a forwarding instruction to forward a data packet; and send the
forwarding instruction to the forwarding node connected to the edge forwarding node of the second network.
30. The control apparatus according to claim 29, wherein the
forwarding instruction includes a matching rule and an action for processing the data packet corresponding to the matching rule.
31. The control apparatus according to claim 27, wherein the
processor is configured to execute program instructions to send a
second retrieval packet including a second identifier which
represents the control apparatus.
32. The control apparatus according to claim 27, wherein the
retrieval packet is a Link Layer Discovery Protocol packet.
33. A network system comprising: a plurality of forwarding nodes; and a control apparatus configured to control the forwarding nodes in a first network, wherein the control apparatus comprises a memory configured to store topology data including the forwarding nodes in the first network, and a processor configured to execute program instructions to: receive, from one of the forwarding nodes connected to an edge
forwarding node of a second network, a retrieval packet including
an identifier which represents a second control apparatus, wherein
the second control apparatus controls the edge forwarding node of
the second network; identify, based on the identifier, boundary
information defining a boundary between the first network and the
second network; and store the boundary information with the
topology data.
34. The network system according to claim 33, wherein the processor
is configured to execute program instructions to identify the
boundary information when the identifier does not represent the
control apparatus.
35. The network system according to claim 33, wherein the processor
is configured to execute program instructions to: identify, based
on the boundary information and the topology data, a forwarding
instruction to forward a data packet; and send the forwarding
instruction to the forwarding node connected to the edge forwarding node of the second network.
36. The network system according to claim 35, wherein the
forwarding instruction includes a matching rule and an action for processing the data packet corresponding to the matching rule.
37. The network system according to claim 33, wherein the processor
is configured to execute program instructions to send a second
retrieval packet including a second identifier which represents the
control apparatus.
38. The network system according to claim 33, wherein the retrieval packet is a Link Layer Discovery Protocol packet.
39. A network control method for a first network including a plurality of forwarding nodes, the network control method comprising: receiving,
from one of the forwarding nodes connected to an edge forwarding
node of a second network, a retrieval packet including an
identifier which represents a second control apparatus, wherein the
second control apparatus controls the edge forwarding node of the
second network; identifying, based on the identifier, boundary
information defining a boundary between the first network and the
second network; and storing the boundary information with topology data.
40. The network control method according to claim 39, further comprising identifying the boundary information when the
identifier does not represent the control apparatus.
41. The network control method according to claim 39, further
comprising: identifying, based on the boundary information and the
topology data, a forwarding instruction to forward a data packet;
and sending the forwarding instruction to the forwarding node
connected to the edge forwarding node of the second network.
42. The network control method according to claim 41, wherein the
forwarding instruction includes a matching rule and an action for processing the data packet corresponding to the matching rule.
43. The network control method according to claim 39, further comprising sending a second retrieval packet including a second
identifier which represents the control apparatus.
44. The network control method according to claim 39, wherein the retrieval packet is a Link Layer Discovery Protocol packet.
Description
CROSS-REFERENCE TO RELATED PATENT APPLICATIONS
[0001] This application is a continuation of U.S. patent
application Ser. No. 14/387,319, filed Sep. 23, 2014, which is a
National Stage Entry of International Application No.
PCT/JP2013/001913, filed Mar. 21, 2013, which claims priority from
Japanese Patent Application No. 2012-068286, filed Mar. 23, 2012.
The entire contents of the above-referenced applications are
expressly incorporated herein by reference.
TECHNICAL FIELD
[0002] The present invention relates to a communications system and, more particularly, to a communications system under centralized management, in which controllers control packet transfers among network devices.
BACKGROUND ART
[0003] One problem of conventional network devices is that flexible load control, such as load distribution and load concentration, cannot be achieved by external control. This makes it difficult to monitor and improve the system behavior in a large-scale network, so that a modification of the system design and/or configuration incurs a large cost.
[0004] One proposed approach for solving this problem is separation
of the packet transfer function and the route control function
which are both conventionally implemented by a network device. For
example, in a system in which the packet transfer function is
assigned to network devices and the control function is assigned to
a controller provided separately from the network devices, the
controller can perform centralized management of packet transfer,
which allows establishing a network with high flexibility.
[0005] (CD-Separated Network)
[0006] One proposed function-separated network is a CD-separated
network (where C stands for control plane and D stands for data
plane), in which a controller operating on the control plane
controls node devices operating on the data plane.
[0007] One example of the CD-separated network is an OpenFlow network, based on the OpenFlow technique, in which route control in the network is achieved by a controller controlling switches. Details of the OpenFlow technique are disclosed in non-patent documents 1 and 2. Note that the OpenFlow network should be construed as merely one example.
[0008] (OpenFlow Network)
[0009] In an OpenFlow network, a controller, such as an OpenFlow
controller (OFC), controls the behavior of node devices, such as
OpenFlow switches (OFSs), by processing route control information
(or a flow table) which describes the route control of the node
devices.
[0010] Controllers and node devices are connected via control
channels (communication channels for control) called "secure
channels", which are communication paths protected by using
dedicated lines or an SSL (Secure Socket Layer) technique.
Controllers and node devices exchange OpenFlow messages defined in
the OpenFlow protocol via control channels.
[0011] Each node device in the OpenFlow network is controlled by the controller and may be an edge switch or a core switch. A series of packet
transfers from the receipt of a packet at an ingress edge switch to
the transmission at an egress edge switch in the OpenFlow network
is referred to as "flow". In the OpenFlow network, communications
are each regarded as an end-to-end flow, and the route control, the
trouble recovery, and the load distribution and optimization are
carried out in units of flows.
[0012] It should be noted that the frame should be regarded as an alternative to the packet. The difference between the packet and the frame lies only in the protocol data unit (PDU). The packet is the PDU of the TCP/IP (transmission control
protocol/internet protocol). On the other hand, the frame is the
PDU of the "Ethernet" (registered trademark).
[0013] The route control information (or the flow table) includes a
set of: identifying conditions (identifying rules) to identify
packets to be treated as a flow; statistical information that indicates the number of times packets have complied or matched with the identifying conditions (or the identifying rules); and
processing rules (or flow entries) which define a group of contents
of processing (or actions) to be performed on packets.
[0014] The identifying conditions (or the identifying rules) of each processing rule (or each flow entry) are defined by various combinations of any or all of the data of the respective protocol hierarchies included in the header region (or the header field) of the packet, and are distinguishable from one another by using these data. Examples of the data of the respective protocol hierarchies include the destination address, the source address, the destination port, and the source port. Note that the above-described addresses may be defined by a MAC (media access control) address or an IP address. Data of the ingress port may also be used as the identifying conditions (or the identifying rules) of the processing rules (or the flow entries). Furthermore, the identifying conditions (or the identifying rules) of the processing rules (or the flow entries) may be set in the form of a representation in which some or all of the values of the header region of the packet to be treated as the flow are represented by using a normal representation, a wildcard character "*" or the like.
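The identifying conditions above can be sketched as a small, hypothetical data model. The field names are illustrative only (they are not taken from the OpenFlow specification), and a field left unset plays the role of the wildcard "*":

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MatchRule:
    """Hypothetical identifying conditions; None acts as the wildcard '*'."""
    ingress_port: Optional[int] = None
    src_mac: Optional[str] = None
    dst_mac: Optional[str] = None
    src_ip: Optional[str] = None
    dst_ip: Optional[str] = None
    src_port: Optional[int] = None
    dst_port: Optional[int] = None

    def matches(self, header: dict) -> bool:
        # Every non-wildcard field must equal the packet's header value.
        for name, want in vars(self).items():
            if want is not None and header.get(name) != want:
                return False
        return True

rule = MatchRule(dst_ip="10.0.0.2", dst_port=80)
print(rule.matches({"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2", "dst_port": 80}))  # True
```

An all-default `MatchRule()` matches any header, which corresponds to the fully wildcarded conditions used by the initial state processing rule described later.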
[0015] The contents of processing (or the action) of a processing rule (or a flow entry) indicate an operation such as "output to a particular port", "discarding" and "rewriting of header". For
example, if the contents of processing (or the action) of a
processing rule (or a flow entry) indicates identification
information of an output port (an output port number and the like),
the node device outputs the packet to the indicated port, and if
not, the node device discards the packet. Instead, if the contents
of processing (or the action) of the processing rule (or the flow
entry) indicates the header information, the node device rewrites
the header of the packet on the basis of the header
information.
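The three kinds of contents of processing described above (output, discard, header rewrite) might be sketched as follows; the action encoding and all names are assumptions for illustration only:

```python
# Hypothetical action encoding: ("output", port), ("rewrite", fields), ("drop",).
def apply_action(action, packet):
    kind = action[0]
    if kind == "output":
        # Forward the packet unchanged to the indicated (physical or virtual) port.
        return ("send", action[1], packet)
    if kind == "rewrite":
        # Rewrite header fields on the basis of the supplied header information.
        return ("modified", {**packet, **action[1]})
    # Any other action kind discards the packet.
    return ("dropped", None)

print(apply_action(("output", 3), {"dst_ip": "10.0.0.2"}))
# → ('send', 3, {'dst_ip': '10.0.0.2'})
```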
[0016] A node device in the OpenFlow network performs the contents
of processing (or the action) of a processing rule (or a flow
entry) on a group of packets (or a series of packets) that complies
with the identifying conditions (or the identifying rules) of the
processing rule (or the flow entry).
[0017] For example, when receiving a packet, an OpenFlow switch
(OFS), which corresponds to a node device in the OpenFlow network,
retrieves the processing rule (or the flow entry) associated with
the identifying conditions (or the identifying rule) complying with
the header information of the received packet from the route control information (or the flow table). If a processing rule (or flow entry) complying with the received packet is found as a result of the retrieval, the contents of processing (or the action)
described in the action field of the processing rule (or the flow
entry) is performed on the received packet. If no processing rule
(or the flow entry) complying with the received packet is found as
a result of the retrieval, on the other hand, the received packet
is judged as the first packet. In this case, inquiry information of
the received packet is transmitted to an OpenFlow controller (OFC),
which corresponds to the controller in the OpenFlow network, via a
control channel to request determination of the route of the packet based on the source and destination of the received packet; this is
followed by receiving a processing rule (or a flow entry) for
attaining the packet transfer along the determined route and then
updating the route control information (or the flow table).
[0018] It should be noted that an initial state processing rule (or
a default entry) is registered in the route control information (or
the flow table), where the initial state processing rule describes
identifying conditions (or identifying rules) of a low priority
which are defined so as to comply with the header information of
any packets. If no other processing rule (or flow entry) complying
with the received packet is found, the received packet complies
with the initial state processing rule (or the default entry). The
contents of processing (or the action) of the initial state processing rule (or the default entry) are defined to instruct transmission of inquiry information of the received packet to the OpenFlow controller (OFC).
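The lookup behavior described in paragraphs [0017] and [0018] can be summarized in a short sketch, assuming a simple list-of-dictionaries flow table (the structure and names are illustrative, not the OpenFlow wire format): the highest-priority complying entry wins, and a low-priority default entry matching any header forwards an inquiry to the controller.

```python
def lookup(flow_table, header):
    """Return the action of the highest-priority entry complying with the header."""
    for entry in sorted(flow_table, key=lambda e: e["priority"], reverse=True):
        if all(header.get(k) == v for k, v in entry["match"].items()):
            return entry["action"]
    return None  # unreachable once a default entry is installed

flow_table = [
    {"priority": 100, "match": {"dst_ip": "10.0.0.2"}, "action": ("output", 3)},
    # Low-priority default entry: an empty match complies with any header.
    {"priority": 0, "match": {}, "action": ("inquiry_to_controller",)},
]

print(lookup(flow_table, {"dst_ip": "10.0.0.2"}))   # known flow
print(lookup(flow_table, {"dst_ip": "192.0.2.9"}))  # judged as a first packet
```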
[0019] As mentioned above, the OpenFlow switch (OFS) determines
processing to be done on a packet in accordance with the setting of
processing rules (or flow entries) set by the OpenFlow controller
(OFC). In particular, the "output" processing, in which a packet is
outputted to a specified interface, is often used as the
processing. It should be noted that the specified interface is not limited to a physical port and may be a virtual port.
[0020] As thus described, control of packets is attained by
centralized control of OpenFlow switches (OFS) by an OpenFlow
controller (OFC) in the OpenFlow network. One issue is that one
OpenFlow controller (OFC) can control only a limited number of
OpenFlow switches (OFS). Accordingly, an increase in the scale of
the network, which causes an increase in the number of the OpenFlow
switches (OFS), may result in the calculation of processing rules (flow entries) in the OpenFlow controller (OFC) and the like becoming a bottleneck for network quality.
[0021] One approach to address this may be connecting a plurality of OpenFlow networks, each including one OpenFlow controller (OFC) and a plurality of OpenFlow switches (OFS) controlled by that OpenFlow controller, when the scale of the network increases.
CITATION LIST
Non Patent Literature
[0022] [NPL 1] Nick McKeown et al., "OpenFlow: Enabling Innovation in Campus Networks", online, retrieved on Jan. 23, 2012, Internet (URL: http://www.openflow.org/documents/openflow-wp-latest.pdf)
[0023] [NPL 2] "OpenFlow Switch Specification, Version 1.1.0 Implemented", online, retrieved on Feb. 28, 2012, Internet (URL: http://www.openflowswitch.org/documents/openflow-spec-v1.1.0.pdf)
SUMMARY OF INVENTION
[0024] An interconnection of a plurality of networks each under centralized management, such as OpenFlow networks, requires
exchanging and sharing of information related to route control
among the controllers to control the traffic over the networks. One
approach to achieve this may be using an existing routing protocol,
such as the OSPF (open shortest path first) protocol and the BGP
(border gateway protocol) or other data sharing protocols in order
to exchange and share information related to the route control
among the controllers.
[0025] The use of these protocols to exchange information related
to the route control among the controllers, however, requires
establishing a connection between adjacent controllers.
[0026] In an aspect of the present invention, a communications
system includes: a first node device provided in a first network; a
first controller controlling the first node device; a second node
device provided in a second network and connected to the first node
device; and a second controller controlling the second node device.
The first controller sets the first node device with a processing
rule according to which packets transferred between the first and
second controllers are processed. The second controller sets the
second node device with a processing rule according to which the
packets are processed. The first and second controllers exchanges
the packets each other through at least the first and second node
devices.
[0027] In another aspect of the present invention, a communication
method includes:
[0028] setting a first node device provided in a first network by a
first controller controlling the first node device with a
processing rule according to which packets transferred between the
first controller and a second controller controlling a second node device provided in a second network are processed;
[0029] setting the second node device by the second controller with
a processing rule according to which the packets are processed;
[0030] establishing a connection between the first and second node
devices; and
[0031] exchanging the packets between the first and second
controllers through at least the first and second node devices.
[0032] In another aspect of the present invention, a program is
provided for causing a computer or a network device to perform the
operations of respective devices in the above-described
communication method. The program may be stored in a storage device
or a non-transitory recording medium.
[0033] The present invention enables establishing a communication
connection between controllers of adjacent networks.
BRIEF DESCRIPTION OF DRAWINGS
[0034] FIG. 1 shows an exemplary configuration of a communications
system according to the present invention.
[0035] FIG. 2 shows an exemplary system configuration in a first
embodiment.
[0036] FIG. 3 shows an exemplary configuration of a controller.
[0037] FIG. 4 shows an exemplary configuration of a node device.
[0038] FIG. 5 shows an exemplary configuration of an LLDP frame in
the first embodiment.
[0039] FIG. 6A shows exemplary contents of a processing rule
defined for a first network.
[0040] FIG. 6B shows exemplary contents of a processing rule
defined for a second network.
[0041] FIG. 7 shows an exemplary configuration of an LLDP frame in
a second embodiment.
[0042] FIG. 8 shows an exemplary configuration of a controller in a
third embodiment.
[0043] FIG. 9A shows exemplary contents of a sorting rule defined
for the controller of the first network.
[0044] FIG. 9B shows exemplary contents of a sorting rule defined
for the controller of the second network.
[0045] FIG. 10A shows an exemplary configuration of the first
network of the communications system in a fourth embodiment.
[0046] FIG. 10B shows an exemplary configuration of the second
network of the communications system in the fourth embodiment.
[0047] FIG. 11A shows exemplary contents of a sorting rule defined
for the controller of the first network.
[0048] FIG. 11B shows exemplary contents of a sorting rule defined
for the controller of the second network.
[0049] FIG. 12 shows an exemplary hardware configuration of a
controller according to the present invention.
[0050] FIG. 13 shows an exemplary hardware configuration of a node
device according to the present invention.
DESCRIPTION OF EXEMPLARY EMBODIMENTS
[0051] Embodiments of the present invention are described in the following with an example in which OpenFlow networks, a type of network under centralized management, are used. It should be noted, however, that the present invention is not limited to a communication system in which OpenFlow networks are used.
Embodiment
[0052] Embodiments of the present invention are described below in detail with reference to the attached drawings.
System Configuration
[0053] A description is first given of an exemplary configuration
of a communications system in one embodiment of the present
invention with reference to FIG. 1.
[0054] In one embodiment, a communications system includes
controllers 10 and node devices 20.
[0055] The controllers 10 are information processing apparatuses
which control the node devices 20.
[0056] The node devices 20 are communication devices provided in
networks.
[0057] The controllers 10 and the node devices 20 are connected
through control channels. The controllers 10 and the node devices
20 communicate with each other by using OpenFlow messages defined
in an OpenFlow protocol via the control channels.
[0058] Each node device 20 is connected to an adjacent node device 20 via a data communication link such as a LAN (local area network). Moreover, a node device 20 which operates as an edge
switch is adapted to have a connection with a host (a client, a
server or the like) or an external network device which does not
belong to the network in which the node device 20 is provided.
[0059] It should be noted that the controllers 10 and the node
devices 20 are not limited to physical machines; the controllers 10
and the node devices 20 may be virtual machines (VMs).
[0060] FIG. 1 shows controllers 10-1 and 10-2, as examples of the
controllers 10. FIG. 1 also shows node devices 20-1 to 20-6 as
examples of the node devices 20.
[0061] The controller 10-1 is connected via a control channel to
each of the node devices 20-1 to 20-3.
[0062] The controller 10-2 is connected via a control channel to
each of the node devices 20-4 to 20-6.
[0063] The node devices 20-1 to 20-6 are each connected to one or
more adjacent node devices via one or more data communication
links, such as LANs.
[0064] In this embodiment, the node device 20-1 is connected to the
node device 20-2 via a data communication link. The node device
20-2 is connected to the node device 20-3 via a data communication
link. The node device 20-3 is connected to the node device 20-4 via
a data communication link. The node device 20-4 is connected to the
node device 20-5 via a data communication link. The node device
20-5 is connected to the node device 20-6 via a data communication
link.
[0065] The node devices 20-1 to 20-3 are provided in a network 1.
That is, each of the node devices 20-1 to 20-3 is arranged in the
network 1.
[0066] The node devices 20-4 to 20-6 are arranged in the network 2.
That is, each of the node devices 20-4 to 20-6 is arranged in the
network 2.
[0067] Accordingly, the networks 1 and 2 are connected to each other via the data communication link connecting the node device 20-3 and the node device 20-4.
[0068] It should be noted that the control channels and the data communication links may each be a wired or a wireless communication link.
[0069] (Identification Information of Controllers and Node
Devices)
[0070] In this embodiment, the controller 10-1 is assigned with a
controller ID "CPID1" as its own identification information. The
controller 10-2 is assigned with a controller ID "CPID2" as its own
identification information. The node device 20-1 is assigned with a
node device ID "DPID1" as its own identification information. The
node device 20-2 is assigned with a node device ID "DPID2" as its
own identification information. The node device 20-3 is assigned
with a node device ID "DPID3" as its own identification
information. The node device 20-4 is assigned with a node device ID
"DPID4" as its own identification information. The node device 20-5
is assigned with a node device ID "DPID5" as its own identification
information. The node device 20-6 is assigned with a node device ID
"DPID6" as its own identification information.
First Embodiment
[0071] A first embodiment of the present invention is described
below.
[0072] In this embodiment, the controllers 10 are each connected to
one of the node devices 20 via a data communication link, similarly
to the connections between the node devices 20. That is, the
controllers 10 may also operate as a "host" connected to any of the
node devices 20.
[0073] (System Configuration in First Embodiment)
[0074] An exemplary configuration of a communications system in
this embodiment is described below with reference to FIG. 2.
[0075] In this embodiment, the controller 10-1 is further connected
to the node device 20-1 through a data communication link. It
should be noted that the controller 10-1 may be logically connected
to the node device 20-1 by tunneling or the like in an actual
implementation.
[0076] Similarly, the controller 10-2 is further connected to the node device 20-6 via a data communication link. It should be noted that the controller 10-2 may be logically connected to the node device 20-6 by tunneling or the like in an actual implementation.
[0077] It should be noted that the system configuration is not
limited to the above-described embodiment.
[0078] (Configuration of Controllers)
[0079] An exemplary configuration of each controller 10 is
described with reference to FIG. 3. Note that this exemplary
configuration is common to both of the controllers 10-1 and
10-2.
[0080] Each controller 10 includes a node device control section 11
and an interface management section 15.
[0081] The node device control section 11 controls node devices 20
via control channels. For example, the node device control section
11 executes a software program which allows operating as an
OpenFlow controller (OFC) in an OpenFlow network. Here, the node
device control section 11 monitors and manages interface units
contained in each of the node devices 20 via the control channels
and specifies processing rules (flow entries) of packets
transmitted and received by the interface units for the node
devices 20. Examples of the contents of a processing rule (a flow entry) include an instruction to output each packet received by an interface unit to an interface unit or to the node device control section 11 specified on the basis of features of the packet, an instruction to output a packet prepared by the node device control section 11 to a specified interface unit when that packet is received, and the like.
[0082] The interface management section 15 manages a network
interface provided in the controller 10. In this embodiment, the
interface management section 15 is adapted to have a connection to
a network interface within any of the node devices 20 via a data
communication link connected to the network interface within the
controller 10.
[0083] In this embodiment, the node device control section 11 is
connected to the interface management section 15. Thus, the node
device control section 11 can communicate with any of the node
devices 20 via the interface management section 15.
[0084] (Configuration of Node Device Control Section)
[0085] An exemplary configuration of the node device control
section 11 is described below.
[0086] The node device control section 11 includes a topology
management section 111, a control message processing section 112, a
node communication section 113, an adjacency discovery section 114,
an identifying condition calculation section 115, a route
calculation section 116, a processing rule calculation section 117,
a processing rule management section 118 and a processing rule
storage section 119.
[0087] The topology management section 111 manages topology
information and boundary information of its own network. The
topology management section 111 is connected to the interface
management section 15.
[0088] The control message processing section 112 prepares control
messages in accordance with controls to be performed for the node
devices 20. The control message processing section 112 also
analyzes control messages received from the node device 20.
[0089] The node communication section 113 is connected to node
devices 20 via control channels to communicate with the node
devices 20. The node communication section 113 transmits control
messages to the node devices 20 in response to control message
transmission requests from the control message processing section
112. The node communication section 113 also transfers control
messages received from the node devices 20 to the control message
processing section 112.
[0090] The adjacency discovery section 114 discovers a node device
20 located on the boundary with a different network and the
controller 10 in the different network.
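A hedged sketch of how the adjacency discovery section 114 might identify a boundary, following the scheme of claims 27-28: a retrieval packet carries the originating controller's ID, and the link is recorded as a boundary when that ID differs from the receiving controller's own. All function and field names here are assumptions, not the patent's implementation:

```python
def on_retrieval_packet(own_id, node, port, packet, topology):
    """Record a boundary when the packet's controller ID is not our own."""
    peer = packet["controller_id"]
    if peer != own_id:
        topology.setdefault("boundaries", []).append(
            {"node": node, "port": port, "peer_controller": peer})

topology = {}
# Node DPID3 receives, on port 2, a retrieval packet sent by controller CPID2.
on_retrieval_packet("CPID1", "DPID3", 2, {"controller_id": "CPID2"}, topology)
print(topology["boundaries"])
```

A retrieval packet carrying the controller's own ID (an intra-network link) would leave the boundary information unchanged.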
[0091] The identifying condition calculation section 115 calculates
identifying conditions (or the identifying rules) of packets. In
this embodiment, the identifying condition calculation section 115
calculates identifying conditions (or identifying rules) of packets
transferred between the controllers 10 on the basis of the topology
information and the boundary information of its own network. The
identifying condition calculation section 115 is also adapted to,
when receiving inquiry information of the first packet from a node
device 20, calculate identifying conditions (or an identifying
rule) of the packet on the basis of the header information of the
packet.
[0092] The route calculation section 116 calculates transfer routes
of packets. Here, the route calculation section 116 determines the
end points of the route to be used in the packet transfer between
the controllers 10 on the basis of the topology information and the
boundary information of the networks, and calculates the route to
connect the determined end points.
[0093] The processing rule calculation section 117 calculates
processing rules (or flow entries) on the basis of the identifying
conditions calculated by the identifying condition calculation
section 115, and the routes calculated by the route calculation
section 116.
[0094] The processing rule management section 118 manages the
processing rules (or the flow entries). The processing rule
management section 118 registers information related to the
processing rules (or the flow entries) in the processing rule
storage section 119, correlating information related to the
processing rules with identification information (processing rule
IDs) of the processing rules (or the flow entries). The processing
rule management section 118 also requests the control message
processing section 112 to set the node devices 20 with processing
rules (or flow entries).
[0095] The processing rule storage section 119 stores: the
information with regard to the processing rules (or the flow
entries) set to the respective node devices 20 under the management
of the controller 10; and the identification information (the
processing rule IDs) of the processing rules (or the flow entries).
In an actual implementation, the processing rule storage section
119 may store copies or master tables of the route control
information (or the flow table) of the respective node devices
20.
[0096] (Configuration of Node Devices)
[0097] An exemplary configuration of each node device 20 is
described with reference to FIG. 4. Note that this exemplary
configuration is common to all of the node devices 20-1 to
20-6.
[0098] Each node device 20 includes a communication unit 21 and one
or more interface units 22 (one shown).
[0099] The communication unit 21 is connected to a controller 10
via a control channel to exchange control messages. In one example,
the communication unit 21 executes a software program which allows
operating as an OpenFlow switch (OFS) in an OpenFlow network. The
communication unit 21 also processes a packet inputted from an
interface unit 22 in accordance with processing rules (or flow
entries) and processing commands (output instructions or the like),
which are specified by the node device control section 11 in the
controller 10. It should be noted that, when neither a processing
rule (flow entry) nor a processing command (an output instruction or
the like) is specified for an inputted packet, the communication
unit 21 judges the packet to be a first packet and transmits inquiry
information of this packet to the controller 10 through the control
channel.
[0100] The interface unit 22 is a network interface provided within
the node device 20. The interface unit 22 may be a network
interface having physical ports or a network interface having
virtual ports. The interface unit 22 is used for establishing a
connection to a connection destination, such as an adjacent node
device or a host, and is adapted to exchange packets through a data
communication link. The interface unit 22 may also be used to
establish a connection with a controller 10. When receiving a packet from a
connection destination, such as an adjacent node device and a host,
the interface unit 22 outputs the packet to the communication unit
21.
[0101] (Topology Retrieving Process in First Embodiment)
[0102] An exemplary topology retrieving process in the first
embodiment is described below.
[0103] Each of the controllers 10-1 and 10-2 prepares topology
retrieval packets in order to retrieve the topology of its own
network. In one embodiment, LLDP (Link Layer Discovery Protocol)
frames may be used as topology retrieval packets. It should be
noted that topology retrieval packets are not limited to LLDP
frames and packets other than LLDP frames may be used as topology
retrieval packets in an actual implementation.
[0104] In this embodiment, LLDP frames are used as exemplary
topology retrieval packets.
[0105] (LLDP Frame Format in First Embodiment)
[0106] An exemplary format of LLDP frames in this embodiment is
described with reference to FIG. 5.
[0107] In this embodiment, each LLDP frame includes an "LLDP
header" region (or field), and an "optional TLVs" region (or
field). Note that "TLV" stands for type, length and value.
[0108] The "LLDP header" region includes a "source MAC address"
region, a "destination MAC address" region and an "ether type"
region, which is denoted by the legend "eth_type".
[0109] The "source MAC address" region contains the source MAC
address of the LLDP frame.
[0110] The "destination MAC address" contains the destination
source MAC address of the LLDP frame.
[0111] The "ether type" region contains information that indicates
the ether type of the LLDP frame. Usually, "0x88cc" is specified as
the ether type of the LLDP frame.
[0112] The "optional TLVs" region includes an "identification
information" region (or a "controller ID" region).
[0113] The "identification information" region contains the
identification information (or the controller ID) of the controller
that prepares the LLDP header.
[0114] Various types of identification information (or controller
IDs) may be used as long as they are unique across the networks.
For example, a MAC address of an interface of the controller 10, an
IP address, VLAN tag information (a VLAN ID) and the like may be
used as the identification information. It should be noted that the
identification information actually used is not limited to these
examples.
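The LLDP frame layout of FIG. 5 can be sketched in code. The following Python fragment is only an illustration, not the embodiment's implementation; the helper name, the multicast destination address, and the use of an organizationally specific TLV (type 127) to carry the controller ID are assumptions made for the sketch.

```python
import struct

def build_lldp_frame(src_mac: bytes, dst_mac: bytes, controller_id: bytes) -> bytes:
    # "LLDP header" region: destination MAC, source MAC, ether type 0x88cc
    eth_header = dst_mac + src_mac + struct.pack("!H", 0x88CC)
    # "optional TLVs" region: a TLV header packs a 7-bit type and a
    # 9-bit length into two bytes; type 127 (organizationally specific)
    # is assumed here to hold the "identification information" region
    tlv_header = struct.pack("!H", (127 << 9) | len(controller_id))
    end_of_lldpdu = b"\x00\x00"  # End-of-LLDPDU TLV terminates the frame
    return eth_header + tlv_header + controller_id + end_of_lldpdu

frame = build_lldp_frame(
    src_mac=bytes.fromhex("020000000001"),   # assumed controller MAC
    dst_mac=bytes.fromhex("0180c200000e"),   # LLDP multicast address
    controller_id=b"controller-10-1",        # assumed controller ID
)
```

A receiver can recognize the frame as a topology retrieval packet by the ether type at offset 12 and then extract the controller ID from the optional TLV.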
[0115] (Procedure of Topology Retrieval in First Embodiment)
[0116] Next, an exemplary procedure of the topology retrieval in
the first embodiment is described below.
[0117] (1) Operation for Transmitting LLDP Frame
[0118] First, a description is given of an exemplary operation in
which LLDP frames are transmitted from the controllers 10 to the
node devices 20.
[0119] The topology management section 111 in each of the
controllers 10-1 and 10-2 prepares LLDP frames as topology
retrieval packets.
[0120] At this time, the topology management unit 111 in each of
the controllers 10-1 and 10-2 incorporates its own identification
information (or controller ID) into each LLDP frame to identify the
controller; the identification information (or the controller ID)
in each LLDP frame is defined to be unique among the networks.
[0121] In this embodiment, the topology management section 111 in
each of the controllers 10-1 and 10-2 incorporates identification
information (or the controller ID) of the controller 10-1 or 10-2
into the "identification information" region of the "optional TLVs"
region within each LLDP frame as shown in FIG. 5.
[0122] The topology management section 111 in each of the
controllers 10-1 and 10-2 requests the control message processing
section 112 to instruct to transmit the LLDP frames to the node
devices 20. In an actual implementation, the topology management
section 111 may only output the LLDP frames to the control message
processing section 112.
[0123] The control message processing section 112 in each of the
controllers 10-1 and 10-2 prepares packet-out messages each
incorporating an LLDP frame and a transmission instruction thereof
in response to the request of the transmission instruction of the
LLDP frames to the node devices 20, and requests the node
communication section 113 to transmit the packet-out messages to
all of the node devices 20 provided in its own network. The
packet-out message is a type of control message. The packet-out
messages include transmission instructions which instruct each node
device 20 to transmit the LLDP frame from all of its interface
units 22.
[0124] The node communication section 113 in each of the
controllers 10-1 and 10-2 transmits the packet-out messages from
the control message processing section 112 to all of the node
devices 20 provided in its own network through the control
channels.
[0125] The communication unit 21 in each node device 20 transmits
the LLDP frame to the connection destinations from all the
interface units 22 in its own node device 20, in response to the
above-described packet-out message. That is, the communication unit
21 in each node device 20 transmits the LLDP frame in the form of
broadcasting. Also, the communication unit 21 in each node device
20 receives the LLDP frames from the connection destinations via
the interface units 22.
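The packet-out flooding step described above can be sketched as follows; the message layout and field names are assumptions for illustration, not the OpenFlow wire format.

```python
def make_packet_out(lldp_frame: bytes) -> dict:
    # a packet-out message carries the LLDP frame together with an
    # instruction to transmit it from all interface units 22
    return {"type": "packet_out", "frame": lldp_frame, "output": "ALL"}

def broadcast_lldp(node_device_ids, lldp_frame):
    # the controller prepares one packet-out message per node device
    # provided in its own network
    return {dev: make_packet_out(lldp_frame) for dev in node_device_ids}

msgs = broadcast_lldp(["20-1", "20-2", "20-3"], b"<lldp>")
```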
[0126] (2) Operation for Transmitting Inquiry Information of LLDP
Frame Via Control Channel
[0127] In the following, a description is given of an exemplary
operation of transmitting inquiry information of an LLDP frame from
each node device 20 to the controller 10 via the control
channel.
[0128] When receiving an LLDP frame from a connection destination
via an interface unit 22, the communication unit 21 in each node
device 20 transmits a packet-in message as inquiry information to
the controller 10 that controls the node device 20, through the
control channel. The packet-in message is a type of control
message.
[0129] The packet-in message includes an "LLDP frame" region, a
"node device ID" region and an "interface ID" region.
[0130] The "LLDP frame" region contains the received LLDP
frame.
[0131] The "node device ID" region contains the identification
information (or the node device ID) of the node device 20.
[0132] The "interface ID" region contains identification
information (or the interface ID) of the interface unit 22 that
receives the LLDP frame.
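The three regions of the packet-in message can be modeled as a simple record; this is an illustrative sketch, not the embodiment's encoding.

```python
from dataclasses import dataclass

@dataclass
class PacketIn:
    lldp_frame: bytes      # the received LLDP frame
    node_device_id: str    # ID of the node device 20 that received the frame
    interface_id: int      # ID of the interface unit 22 that received it

msg = PacketIn(lldp_frame=b"<lldp>", node_device_id="20-4", interface_id=3)
```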
[0133] Usually, the communication unit 21 in each node device 20
receives LLDP frames containing the identification information (or
the controller ID) of the controller 10 of its own network from
other node devices 20 of the same network, and transmits packet-in
messages respectively incorporating the LLDP frames as the inquiry
information to the controller 10.
[0134] When the source is the node device 20-3 and the destination
is the node device 20-4, however, an LLDP frame containing
identification information (or the controller ID) of the controller
10-1 is transmitted to the controller 10-2. That is, when receiving
the LLDP frame having the identification information (controller
ID) of the controller 10-1 from the node device 20-3, the
communication unit 21 in the node device 20-4 transmits a packet-in
message containing the LLDP frame as the inquiry information to the
controller 10-2.
[0135] When the source is the node device 20-4 and the destination
is the node device 20-3, on the other hand, the LLDP frame which
has the identification information (or the controller ID) of the
controller 10-2 is transmitted to the controller 10-1. That is,
when receiving the LLDP frame having the identification information
(or the controller ID) of the controller 10-2 from the node device
20-4, the communication unit 21 in the node device 20-3 transmits a
packet-in message incorporating the LLDP frame as the inquiry
information to the controller 10-1.
[0136] The node communication section 113 in each of the
controllers 10-1 and 10-2 receives the above packet-in messages
from the respective node devices 20 connected thereto via the
control channels and outputs the packet-in messages to the control
message processing section 112.
[0137] The control message processing section 112 in each of the
controllers 10-1 and 10-2 analyzes the above packet-in messages and
extracts the LLDP frames incorporated in the packet-in messages.
The control message processing section 112 outputs the extracted
LLDP frames to the adjacency discovery section 114.
[0138] The adjacency discovery section 114 in each of the
controllers 10-1 and 10-2 determines whether the identification
information (or the controller ID) described in each LLDP frame is
identical to the identification information (controller ID) of its
own controller 10.
[0139] In this embodiment, the adjacency discovery section 114 in
each of the controllers 10-1 and 10-2 refers to the "identification
information" region of the "optional TLVs" region of each LLDP
frame (see FIG. 5) and determines whether the identification
information (or the controller ID) incorporated in the
"identification information" region is identical to the
identification information (or the controller ID) of its own
controller 10.
[0140] If the identification information (or the controller ID)
incorporated in an LLDP frame is identical to the identification
information (or the controller ID) of its own controller 10, the
adjacency discovery section 114 in each of the controllers 10-1 and
10-2 can dynamically detect and monitor a node device 20 that is
adjacent to another node device within its own network, along with
the interface units 22 of that node device 20.
[0141] If the identification information (controller ID) in an LLDP
frame is different from the identification information (or the
controller ID) of its own controller 10, the adjacency discovery
section 114 in each of the controllers 10-1 and 10-2 can detect the
existence of a controller 10 in a different network. Moreover, the
adjacency discovery section 114 can dynamically detect and monitor
a node device 20 at the boundary with the different network and the
interface units 22 of the node device 20.
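The decision in paragraphs [0140] and [0141] reduces to a comparison of controller IDs. A minimal sketch, with assumed function and data-structure names:

```python
def classify_lldp(own_id: str, frame_id: str, node_id: str, if_id: int, topo: dict) -> str:
    # frame_id is the controller ID extracted from the received LLDP frame
    if frame_id == own_id:
        # same ID: a link between two node devices inside the own network
        topo.setdefault("internal", []).append((node_id, if_id))
        return "internal"
    # different ID: this node device sits at the boundary with another
    # network controlled by the controller identified by frame_id
    topo.setdefault("boundary", []).append((node_id, if_id, frame_id))
    return "boundary"

topo = {}
classify_lldp("10-2", "10-1", "20-4", 1, topo)   # frame arrived from network 1
classify_lldp("10-2", "10-2", "20-5", 2, topo)   # frame stayed inside network 2
```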
[0142] (3) Operation for Transmitting LLDP Frame Via Data
Communication Link
[0143] An exemplary operation for transmitting an LLDP frame from
each node device 20 to the controller 10 via a data communication
link is described below.
[0144] A communication unit 21 in a node device 20 connected to the
controller 10 via a data communication link, for example, the node
device 20-1, transmits the LLDP frame also from the interface unit
22 connected to the controller 10. For example, the communication
unit 21 in the node device 20-1 transmits the LLDP frame to the
controller 10-1 from the interface unit 22 connected to the
controller 10-1 via the data communication link. Similarly, the
node device 20-6 transmits the LLDP frame to the controller 10-2
from the interface unit 22 connected to the controller 10-2 via the
data communication link.
[0145] The interface management section 15 in each of the
controllers 10-1 and 10-2 receives the LLDP frame from each node
device 20 of the connection destination, via the data communication
link, and outputs the LLDP frame to the adjacency discovery section
114.
[0146] The adjacency discovery section 114 in each of the
controllers 10-1 and 10-2 determines whether the identification
information (or the controller ID) described in the LLDP frame is
identical to the identification information (controller ID) of its
own controller 10.
[0147] If the identification information (controller ID)
incorporated in the LLDP frame is identical to the identification
information (or the controller ID) of the controller 10, the
adjacency discovery section 114 in each of the controllers 10-1 and
10-2 can dynamically detect and monitor the node device 20
connected to the controller 10 and the interface units thereof.
[0148] (4) Operation for Preparing Topology Information
[0149] An exemplary operation for preparing topology information in
the controller 10 is described below.
[0150] The adjacency discovery section 114 in each of the
controllers 10-1 and 10-2 sends information detected or obtained by
the above-described operations, to the topology management section
111.
[0151] The topology management section 111 in each of the
controllers 10-1 and 10-2 prepares and stores topology information
and boundary information of the corresponding network on the basis
of the information received from the adjacency discovery section
114.
[0152] (Note)
[0153] The detecting method of the node devices 20 connected to the
controller 10 and the interface units 22 thereof is not limited to
this method in an actual implementation. For example, information
related to each node device 20 connected to the controller 10 and
the interface units 22 thereof may be notified and set in advance
by a manual operation by an operator or the like.
[0154] (Operation for Mutual Notification of Judging Conditions of
Packets Between Controllers in First Embodiment)
[0155] An exemplary operation for mutual notification of
identifying conditions (or identifying rules) of packets between
the controllers 10 in the first embodiment is described below.
[0156] The identifying condition calculation section 115 in each of
the controllers 10-1 and 10-2 calculates identifying conditions (or
identifying rules) of packets used to communicate with the
controller 10 in a different network on the basis of the topology
information and the boundary information of its own network, which
are stored in the topology management section 111, and prepares a
notification packet that stores the identifying conditions (or the
identifying rules). Moreover, the identifying condition calculation
section 115 requests and instructs the control message processing
section 112 to transmit the notification packet to a node device 20
at the boundary with a different network. The identifying
conditions (or the identifying rules) are defined on packet header
fields, which may include the MAC address, the IP address or the
protocol type. It should be noted that in an actual implementation,
the identifying condition calculation section 115 may only output
the notification packet to the control message processing section
112.
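An identifying condition over header fields can be sketched as a small dictionary. The choice of the controllers' MAC addresses and ether type 0x8800 follows the examples of FIG. 6A, while the function name and the placeholder MAC strings are assumptions for illustration.

```python
def build_identifying_condition(src_mac: str, dst_mac: str,
                                eth_type: int = 0x8800) -> dict:
    # condition on the packet header fields that mark packets
    # transferred between the controllers
    return {"src_mac": src_mac, "dst_mac": dst_mac, "eth_type": eth_type}

cond = build_identifying_condition("<MAC of controller 10-1>",
                                   "<MAC of controller 10-2>")
```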
[0157] The control message processing section 112 in each of the
controllers 10-1 and 10-2 prepares a packet-out message
incorporating the notification packet and an instruction to output
the notification packet to a node device 20 in a different network
in response to a request for instructing transmission of the
notification packet to a node device 20 at the boundary with the
different network. Moreover, the control message processing section
112 requests the node communication section 113 to transmit the
packet-out message to the node device 20 at the boundary with the
different network.
[0158] The node communication section 113 in each of the
controllers 10-1 and 10-2 transmits the above packet-out message to
the node device 20 at the boundary with the different network via
the control channel, in response to the transmission request of the
packet-out message from the control message processing section
112.
[0159] When receiving the above-described packet-out message via
the control channel, the communication unit 21 in the node device
20 at the boundary with the different network transmits the
notification packet to the node device 20 in the different network
through the data communication link from the interface unit 22
connected to the node device 20 in the different network, on the
basis of the instruction of outputting the notification packet to
the node device 20 in the different network.
[0160] When a notification packet is inputted from the interface
unit 22 connected to the node device 20 in the different network,
the communication unit 21 in the node device 20 at the boundary
with the different network transmits a packet-in message
incorporating the notification packet as inquiry information to the
controller 10 that controls its own node device 20.
[0161] The node communication section 113 in each of the
controllers 10-1 and 10-2 receives the above-described packet-in
message from each node device 20 connected thereto via the control
channel and outputs the packet-in message to the control message
processing section 112.
[0162] The control message processing section 112 in each of the
controllers 10-1 and 10-2 analyzes the above-described packet-in
message and extracts the notification packet incorporated in the
packet-in message. The control message processing section 112 then
outputs the notification packet to the identifying condition
calculation section 115.
[0163] The identifying condition calculation section 115 in each of
the controllers 10-1 and 10-2 recognizes the identifying conditions
(or the identifying rules) of packets used to communicate with the
controller 10 in the different network, on the basis of the
notification packet.
[0164] As a result, the controllers 10-1 and 10-2 can recognize the
identifying conditions (or the identifying rules) of packets used
in the mutual communications.
[0165] (1) Notification to Controller 10-2 from Controller 10-1
[0166] An exemplary operation of notifying identifying conditions
(or identifying rules) of packets from the controller 10-1 to the
controller 10-2 is described.
[0167] The controller 10-1 prepares a notification packet
incorporating identifying conditions (or identifying rules) of
packets used to communicate with the controller 10-2 and transmits
a packet-out message incorporating the notification packet and an
instruction of outputting the notification packet to the node
device 20-4 to the node device 20-3 at the boundary with the
different network via the control channel.
[0168] When receiving the packet-out message from the controller
10-1, the node device 20-3 transmits the notification packet from
the interface unit 22 connected to the node device 20-4 in
accordance with the instruction of outputting the notification
packet to the node device 20-4.
[0169] When receiving the notification packet from the node device
20-3, the node device 20-4 transmits a packet-in message
incorporating the notification packet as inquiry information to the
controller 10-2, which controls the node device 20-4, via the
control channel.
[0170] The controller 10-2 receives the packet-in message from the
node device 20-4 and obtains the notification packet incorporated
in the packet-in message. This allows the controller 10-2 to
recognize the identifying conditions (or the identifying rules) of
packets used to communicate with the controller 10-1.
[0171] (2) Notification to Controller 10-1 from Controller 10-2
[0172] An exemplary operation of notification of identifying
conditions (or identifying rules) of packets from the controller
10-2 to the controller 10-1 is described.
[0173] The controller 10-2 prepares a notification packet
incorporating identifying conditions (or identifying rules) of
packets used to communicate with the controller 10-1, and transmits
a packet-out message incorporating the notification packet and an
instruction of outputting the notification packet to the node
device 20-3, to the node device 20-4 at the boundary with the
different network via the control channel.
[0174] When receiving the packet-out message from the controller
10-2, the node device 20-4 transmits the notification packet from
the interface 22 connected to the node device 20-3 in accordance
with the instruction of outputting the notification packet to the
node device 20-3.
[0175] When receiving the notification packet from the node device
20-4, the node device 20-3 transmits a packet-in message
incorporating the notification packet as inquiry information to the
controller 10-1, which controls the node device 20-3 via the
control channel.
[0176] The controller 10-1 receives the packet-in message from the
node device 20-4 and obtains the notification packet incorporated
in the packet-in message. This allows the controller 10-1 to
recognize the identifying conditions (or the identifying rule) of
packets used to communicate with the controller 10-2.
[0177] (Operation for Setting Processing Rules in First
Embodiment)
[0178] An exemplary operation for setting processing rules (or flow
entries) in the first embodiment is described below.
[0179] The identifying condition calculation section 115 in each of
the controllers 10-1 and 10-2 calculates identifying conditions (or
identifying rules) of packets used to communicate with the
controller 10 in the different network, on the basis of the
topology information and the boundary information of its own
network which are stored in the topology management section 111.
Also, the identifying condition calculation section 115 recognizes
identifying conditions (or an identifying rule) of packets notified
by the controller 10 in the different network.
[0180] The route calculation section 116 in each of the controllers
10-1 and 10-2 determines the end points of a route to be used in
the communication, on the basis of the topology information and the
boundary information of the network stored in the topology
management section 111.
[0181] At this time, one of the end points of the route is defined
as the node device at the boundary with the different network. In the
network 1, the node device 20-3 is defined as one of the end
points. In the network 2, the node device 20-4 is defined as one of
the end points.
[0182] Also, the other end point of the route is defined as the
node device connected to the controller via a data communication
link. In the network 1, the node device 20-1 is defined as the
other end point. In the network 2, the node device 20-6 is defined
as the other end point.
[0183] The route calculation section 116 in each of the controllers
10-1 and 10-2 calculates a route which connects the end points
thus-determined. Specifically, the route calculation section 116
determines a bi-directional transfer route which connects the
interfaces of the determined node devices 20.
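The embodiment does not prescribe a route calculation algorithm; a breadth-first search between the two determined end points is one common choice and is sketched below, using the network-1 chain of node devices 20-1, 20-2 and 20-3 as an assumed example topology.

```python
from collections import deque

def shortest_route(links: dict, src: str, dst: str):
    # links maps each node device ID to the IDs of its adjacent node devices
    prev = {src: None}
    queue = deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            # walk the predecessor chain back to the source end point
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nxt in links.get(node, ()):
            if nxt not in prev:
                prev[nxt] = node
                queue.append(nxt)
    return None  # the end points are not connected

links = {"20-1": ["20-2"], "20-2": ["20-1", "20-3"], "20-3": ["20-2"]}
route = shortest_route(links, "20-3", "20-1")
```

Running the same search in the reverse direction yields the other half of the bi-directional transfer route.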
[0184] In this embodiment, the route calculation section 116 in the
controller 10-1 calculates the route which connects the interface
unit of the node device 20-1 (in the network 1) used for connection
to the controller 10-1, to the interface unit of the node device
20-3 used for connection to the node device 20-4 in the network 2.
[0185] Similarly, the route calculation section 116 in the
controller 10-2 calculates the route which connects the interface
unit of the node device 20-6 (in the network 2) used for connection
to the controller 10-2, to the interface unit of the node device
20-4 used for connection to the node device 20-3 in the network 1.
[0186] The processing rule calculation section 117 in each of the
controllers 10-1 and 10-2 obtains the route to be used for transfer
from the route calculation section 116 and obtains the identifying
conditions (or the identifying rules) of packets transferred on the
route from the identifying condition calculation section 115.
[0187] The processing rule calculation section 117 in each of the
controllers 10-1 and 10-2 uses the obtained information to
calculate processing required to transfer the packets in each node
device 20 on the route, and calculates the processing rules (or the
flow entries) in which the packet identifying conditions (or the
packet identifying rules) are defined as conditions under which the
processing is to be performed. At this time, the processing rule
calculation section 117 incorporates the identification information
(processing rule ID) into each calculated processing rule (or flow
entry). Specifically, the processing rule ID is incorporated in a
cookie region (a region of 64 bits in which any information can be
stored) in each processing rule (or flow entry).
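Embedding the processing rule ID in the 64-bit cookie region can be sketched as below; the dictionary layout of a flow entry is an assumption made for illustration.

```python
def attach_rule_id(flow_entry: dict, rule_id: int) -> dict:
    # the cookie region holds 64 bits, so the rule ID must fit in that range
    if not 0 <= rule_id < 2**64:
        raise ValueError("processing rule ID must fit in the 64-bit cookie region")
    flow_entry["cookie"] = rule_id
    return flow_entry

entry = attach_rule_id({"match": {}, "action": {}}, rule_id=42)
```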
[0188] The processing rule management section 118 in each of the
controllers 10-1 and 10-2 registers information related to the
processing rules (or the flow entries) calculated by the processing
rule calculation section 117 into the processing rule storage
section 119, and requests the control message processing section
112 to set processing rules (or the flow entries) for each node
device 20 on the route.
[0189] When receiving the setting request of processing rules (or
flow entries) for the node devices 20, the control message
processing section 112 in each of the controllers 10-1 and 10-2
prepares flow modification messages (or FlowMod-Add) in order to
register the processing rules (or the flow entries) in the node
devices 20. After that, the control message processing section 112
requests the node communication section 113 to transmit the flow
modification messages (or FlowMod-Add) to the node devices 20 on
the route. The flow modification messages (or FlowMod-Add) are a
type of control message.
[0190] When receiving the transmission request of the flow
modification messages (FlowMod-Add) to the node devices 20, the
node communication section 113 in each of the controllers 10-1 and
10-2 transmits the flow modification messages (FlowMod-Add) to the
node devices 20 on the route via the control channels.
[0191] When receiving a flow modification message (FlowMod-Add)
from the controller 10 via the control channel, the communication
unit 21 in each node device 20 registers the processing rules (or
the flow entries) on the basis of the contents of the flow
modification message (or FlowMod-Add).
[0192] When subsequently receiving a packet through an interface unit 22,
the communication unit 21 in each node device 20 processes the
received packet in accordance with the contents of the registered
processing rules (or the flow entries).
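The behavior of the communication unit 21 — match the received packet against the registered processing rules, and otherwise treat it as a first packet and raise an inquiry to the controller — can be sketched as follows, with an assumed dictionary layout for the flow entries:

```python
def process_packet(headers: dict, flow_table: list):
    # try each registered processing rule's identifying condition in order
    for entry in flow_table:
        if all(headers.get(field) == value
               for field, value in entry["match"].items()):
            return entry["action"]       # apply the registered processing
    return {"packet_in": headers}        # first packet: inquire the controller

table = [{"match": {"eth_type": 0x8800}, "action": {"output": "20-2"}}]
matched = process_packet({"eth_type": 0x8800}, table)
unmatched = process_packet({"eth_type": 0x0800}, table)
```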
[0193] (Summary of First Embodiment)
[0194] In this way, the above-described operation for setting the
processing rules (or the flow entries) is carried out independently
in the networks 1 and 2. As a result, a bidirectional transfer route
is established between the controllers 10.
[0195] Each of the controllers 10-1 and 10-2 determines the
identifying conditions (or the identifying rules) of packets used
in the mutual communications.
[0196] Each of the controllers 10-1 and 10-2 calculates the
bidirectional route within its own network and specifies the node
devices on the route and the interface units thereof.
[0197] Each of the controllers 10-1 and 10-2 prepares processing
rules (or flow entries) that define a set of identifying conditions
(or identifying rules) and contents of processing (or actions)
which involve outputting packets to the interface unit connected to
the adjacent node device, for the node devices on the route.
[0198] In this embodiment, each of the controllers 10-1 and 10-2
sets the node devices 20 under its own control with processing
rules (or flow entries) which allow packets to be transferred from
the node device at the boundary with the adjacent network to the
node device connected to the controller 10 via the data
communication link. For example, the controller 10-1 sets each of
the node devices 20-1 to 20-3 with processing rules (or flow
entries) which allow transfer of packets from the node device 20-3
to the node device 20-1. The controller 10-2 sets each of the node
devices 20-4 to 20-6 with processing rules (or flow entries) which
allow transfer of packets from the node device 20-4 to the node
device 20-6.
[0199] Also, each of the controllers 10-1 and 10-2 sets the node
devices 20 under its control with processing rules (or flow
entries) which allow transfer of packets from the node device
connected via the data communication link to the interface unit of
the node device at the boundary with the adjacent network. For
example, the controller 10-1 sets each of the node devices 20-1 to
20-3 with processing rules (or flow entries) which allow transfer
of packets from the node device 20-1 to the node device 20-3. The
controller 10-2 sets each of the node devices 20-4 to 20-6 with
processing rules (or flow entries) which allow transfer of packets
from the node device 20-6 to the node device 20-4.
[0200] Since the above-described operations are carried out in each
of the adjacent networks, a transfer route is established in each
of the networks. This enables communications between the
controllers without direct transactions between the adjacent
networks.
[0201] Note that the link connection implies connection through a
data communication link.
[0202] (Format of Processing Rule)
[0203] An exemplary format of the processing rules (or the flow
entries) is described with reference to FIG. 6A and FIG. 6B.
[0204] A processing rule (or a flow entry) includes an identifying
condition region (or an identifying rule region) and a
contents-of-processing region (or an action region).
[0205] The identifying condition region (or the identifying rule
region) includes the "source MAC address" region, a "destination
MAC address" region and an "ether type" region (or an "eth_type"
region).
[0206] The "source MAC address" region contains the MAC address of
the network interface of the source of the packet.
[0207] The "destination MAC address" region contains the MAC
address of the network interface of the destination of the
packet.
[0208] The "ether type" region contains information indicating the
ether type of the packet.
[0209] The "contents-of-processing" region include an "action"
region.
[0210] The "action" region contains an instruction of outputting
the packet to a predetermined output destination.
[0211] In this embodiment, each controller is allowed to flexibly
use various types of packets for each counterpart controller.
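The match-plus-action structure described in paragraphs [0204] to [0210] can be sketched as a small data type. This is an illustrative model only: the field names and the string form of the action are assumptions for the sketch, not a format defined by this application or by any particular OpenFlow implementation.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class FlowEntry:
    """Sketch of a processing rule (flow entry): an identifying
    condition region (source MAC, destination MAC, ether type) and a
    contents-of-processing region (one output action)."""
    src_mac: str    # "source MAC address" region
    dst_mac: str    # "destination MAC address" region
    eth_type: int   # "ether type" (eth_type) region
    action: str     # "action" region: a predetermined output destination

    def matches(self, src_mac: str, dst_mac: str, eth_type: int) -> bool:
        # A packet is processed by this rule only when it satisfies
        # every identifying condition.
        return (self.src_mac == src_mac
                and self.dst_mac == dst_mac
                and self.eth_type == eth_type)
```

For example, a rule matching packets from controller 10-2 to controller 10-1 would carry those two (placeholder) MAC addresses, the ether type 0x8800, and an action naming the output destination.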
[0212] (Processing Rule in Network 1) FIG. 6A shows an example of
processing rules (or flow entries) respectively set to the node
devices 20-1 to 20-3 from the controller 10-1 in the network 1.
[0213] (1) Processing Rule (Flow Entry) from Node Device 20-1 to
Controller 10-1
[0214] Source MAC address:
[0215] MAC address of controller 10-2
[0216] Destination MAC address:
[0217] MAC address of controller 10-1
[0218] Ether Type (eth_type): 0x8800
[0219] Action: output to the controller 10-1
[0220] (2) Processing Rule (Flow Entry) from Node Device 20-1 to
Node Device 20-2
[0221] Source MAC Address:
[0222] MAC address of controller 10-1
[0223] Destination MAC address:
[0224] MAC address of controller 10-2
[0225] Ether Type (eth_type): 0x8800
[0226] Action: output to node device 20-2
[0227] (3) Processing Rule (Flow Entry) from Node Device 20-2 to
Node Device 20-1
[0228] Source MAC address:
[0229] MAC address of controller 10-2
[0230] Destination MAC address:
[0231] MAC address of controller 10-1
[0232] Ether Type (eth_type): 0x8800
[0233] Action: output to node device 20-1
[0234] (4) Processing Rule (Flow Entry) from Node Device 20-2 to
Node Device 20-3
[0235] Source MAC address:
[0236] MAC address of controller 10-1
[0237] Destination MAC address:
[0238] MAC address of controller 10-2
[0239] Ether type (eth_type): 0x8800
[0240] Action: output to node device 20-3
[0241] (5) Processing Rule (Flow Entry) from Node Device 20-3 to
Node Device 20-2
[0242] Source MAC address:
[0243] MAC address of controller 10-2
[0244] Destination MAC address: MAC address of controller 10-1
[0245] Ether type (eth_type): 0x8800
[0246] Action: output to node device 20-2
[0247] (6) Processing Rule (Flow Entry) from Node Device 20-3 to
Node Device 20-4
[0248] Source MAC address:
[0249] MAC address of controller 10-1
[0250] Destination MAC address:
[0251] MAC Address of controller 10-2
[0252] Ether type (eth_type): 0x8800
[0253] Action: output to node device 20-4
[0254] The above-described processing rules (1) and (2) are
processing rules (or flow entries) used in the node device
20-1.
[0255] The above-described processing rules (3) and (4) are
processing rules (or flow entries) used in the node device
20-2.
[0256] The above-described processing rules (5) and (6) are
processing rules (or flow entries) used in the node device
20-3.
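The six entries above follow a simple pattern along the route 20-1, 20-2, 20-3: at each node, packets addressed to controller 10-1 are output toward the previous hop and packets addressed to controller 10-2 toward the next hop. A minimal sketch of deriving the FIG. 6A table from the route, with placeholder MAC addresses and illustrative node names:

```python
# Placeholder identifiers for the sketch (not values from the application).
MAC_C1 = "00:00:00:00:00:01"   # assumed MAC address of controller 10-1
MAC_C2 = "00:00:00:00:00:02"   # assumed MAC address of controller 10-2
ETH_TYPE = 0x8800

# The route in network 1, with the two ends of each rule pair:
# controller 10-1 on one side, node device 20-4 in network 2 on the other.
route = ["controller-10-1", "node-20-1", "node-20-2", "node-20-3", "node-20-4"]


def rules_for(node: str) -> list[dict]:
    """The two flow entries set to one node device on the route."""
    i = route.index(node)
    return [
        # Toward controller 10-1: packets from 10-2 go to the previous hop.
        {"src": MAC_C2, "dst": MAC_C1, "eth_type": ETH_TYPE,
         "action": f"output:{route[i - 1]}"},
        # Toward controller 10-2: packets from 10-1 go to the next hop.
        {"src": MAC_C1, "dst": MAC_C2, "eth_type": ETH_TYPE,
         "action": f"output:{route[i + 1]}"},
    ]


table = {n: rules_for(n) for n in ["node-20-1", "node-20-2", "node-20-3"]}
```

The same pattern, mirrored, yields the FIG. 6B entries for the node devices 20-4 to 20-6 in network 2.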
[0257] (Processing Rule in Network 2)
[0258] FIG. 6B shows an example of the processing rule (or the flow
entry) set to each of the node devices 20-4 to 20-6 from the
controller 10-2 in the network 2.
[0259] (1) Processing Rule (Flow Entry) from Node Device 20-4 to
Node Device 20-3
[0260] Source MAC address:
[0261] MAC address of controller 10-2
[0262] Destination MAC Address:
[0263] MAC address of controller 10-1
[0264] Ether Type (eth_type): 0x8800
[0265] Action: output to node device 20-3
[0266] (2) Processing Rule (Flow Entry) from Node Device 20-4 to
Node Device 20-5
[0267] Source MAC address:
[0268] MAC Address of controller 10-1
[0269] Destination MAC address:
[0270] MAC Address of controller 10-2
[0271] Ether type (eth_type): 0x8800
[0272] Action: output to node device 20-5
[0273] (3) Processing Rule (Flow Entry) from Node Device 20-5 to
Node Device 20-4
[0274] Source MAC address:
[0275] MAC address of controller 10-2
[0276] Destination MAC address:
[0277] MAC Address of controller 10-1
[0278] Ether Type (eth_type): 0x8800
[0279] Action: output to node device 20-4
[0280] (4) Processing Rule (Flow Entry) from Node Device 20-5 to
Node Device 20-6
[0281] Source MAC address:
[0282] MAC address of controller 10-1
[0283] Destination MAC Address:
[0284] MAC address of controller 10-2
[0285] Ether Type (eth_type): 0x8800
[0286] Action: output to node device 20-6
[0287] (5) Processing Rule (Flow Entry) from Node Device 20-6 to
Node Device 20-5
[0288] Source MAC address:
[0289] MAC address of controller 10-2
[0290] Destination MAC address:
[0291] MAC Address of controller 10-1
[0292] Ether type (eth_type): 0x8800
[0293] Action: output to node device 20-5
[0294] (6) Processing Rule (Flow Entry) from Node Device 20-6 to
Controller 10-2
[0295] Source MAC address:
[0296] MAC Address of controller 10-1
[0297] Destination MAC address:
[0298] MAC Address of controller 10-2
[0299] Ether type (eth_type): 0x8800
[0300] Action: output to controller 10-2
[0301] The above-described processing rules (1) and (2) are
processing rules (or flow entries) used in the node device
20-4.
[0302] The above-described processing rules (3) and (4) are
processing rules (or flow entries) used in the node device
20-5.
[0303] The above-described processing rules (5) and (6) are
processing rules (or flow entries) used in the node device
20-6.
[0304] (Supplement) The MAC address of the controller 10-1 means
the MAC address of the network interface in the controller 10-1.
Also, the MAC address of the controller 10-2 means the MAC address
of the network interface in the controller 10-2.
[0305] It should be noted that the MAC address is merely one
exemplary identifier used to identify the source and the
destination. The IP address or the like may be used in place of the
MAC address.
Second Embodiment
[0306] A description is given of a second embodiment of the present
invention in the following.
[0307] In this embodiment, identifying conditions (or identifying
rules) are also determined in the topology retrieval by using LLDP
frames. Specifically, identifying conditions (or identifying rules)
of packets are embedded in LLDP frames in addition to
identification information (or the controller ID) of the controller
when LLDP frames are prepared in the controller.
[0308] (Format of LLDP Frame in Second Embodiment)
[0309] An exemplary format of the LLDP frame in this embodiment is
described with reference to FIG. 7.
[0310] In this embodiment, each LLDP frame includes an "LLDP
header" region (or field) and an "optional TLVs" region.
[0311] The contents of the "LLDP header" region in the second
embodiment are same as those in the first embodiment.
[0312] The "optional TLVs" region includes an "identification
information" region (or a "controller ID" region), a "MAC address"
region and an "ether type" region (or an "eth_type" region).
[0313] The "identification information" region contains
identification information (or the controller ID) of the controller
which prepares the LLDP frame.
[0314] The "MAC address" region contains a MAC address which is one
of identifying conditions (or identifying rules) of packets.
[0315] The "ether type (eth_type)" region contains information that
indicates an ether type which is one of identifying conditions (or
identifying rules) of packets.
[0316] It should be noted that the MAC address is merely one
exemplary identifier used to identify the source and the
destination. The IP address or the like may be used in place of the
MAC address.
[0317] That is, in this embodiment, identifying conditions (or
identifying rules) of packets are stored in the "optional TLVs"
region in each LLDP frame.
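The "optional TLVs" region of FIG. 7 can be sketched by packing the controller ID, the MAC address and the ether type as organizationally specific LLDP TLVs (type 127, with a 7-bit type and a 9-bit length in the TLV header). The OUI and the subtype numbers below are placeholders, not values defined by this application:

```python
import struct


def tlv(subtype: int, value: bytes, oui: bytes = b"\x00\x00\x00") -> bytes:
    """One organizationally specific LLDP TLV (type 127).
    The OUI and subtype are assumed for illustration."""
    payload = oui + bytes([subtype]) + value
    header = (127 << 9) | len(payload)   # 7-bit type, 9-bit length
    return struct.pack("!H", header) + payload


# The three FIG. 7 regions, with placeholder contents:
controller_id = tlv(1, b"controller-10-1")              # identification information
mac_addr = tlv(2, bytes.fromhex("000000000001"))        # identifying condition: MAC
eth_type = tlv(3, struct.pack("!H", 0x8800))            # identifying condition: eth_type

optional_tlvs = controller_id + mac_addr + eth_type
```

A receiving controller would parse these TLVs back out of the frame to recover the counterpart controller's ID and the identifying conditions, as described for the reception side below.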
[0318] (System Configuration)
[0319] This embodiment is described on the basis of the system
configuration shown in FIG. 2, similarly to the first
embodiment.
[0320] The configurations of the controllers 10 and the node
devices 20 are basically similar to those of the first
embodiment.
[0321] (Operation of Topology Retrieval in Second Embodiment) An
exemplary operation for topology retrieval in the second embodiment
is described below.
[0322] First, the topology management section 111 in each of the
controllers 10-1 and 10-2 prepares LLDP frames as topology
retrieval packets, similarly to the first embodiment.
[0323] At this moment, the topology management section 111 in each
of the controllers 10-1 and 10-2 incorporates identification
information (or the controller ID) which indicates its own
controller in each LLDP frame, the identification information being
unique among the networks.
[0324] Here, the topology management section 111 in each of the
controllers 10-1 and 10-2 incorporates the identification
information (or the controller ID) of its own controller 10 into
the "identification information" region in the "optional TLVs"
region in each LLDP frame as shown in FIG. 7.
[0325] The topology management section 111 in each of the
controllers 10-1 and 10-2 outputs the prepared LLDP frames to the
identifying condition calculation section 115.
[0326] The identifying condition calculation section 115 in each of
the controllers 10-1 and 10-2 calculates identifying conditions (or
identifying rules) of packets used to communicate with the
controller 10 in the different network, on the basis of the
topology information and the boundary information of the networks
which are stored in the topology management section 111.
[0327] In this embodiment, the identifying condition calculation
section 115 in each of the controllers 10-1 and 10-2 incorporates
identifying conditions (or identifying rules) of packets used to
communicate with the adjacent node device 20 by each node device 20
into each LLDP frame.
[0328] In this embodiment, the identifying condition calculation
section 115 in each of the controllers 10-1 and 10-2 incorporates
the MAC address of the network interface within its own controller
10 in the "MAC address" region of the "optional TLVs" region in
each LLDP frame as shown in FIG. 7.
[0329] Also, the identifying condition calculation section 115 in
each of the controllers 10-1 and 10-2 incorporates information
indicating the ether type of packets used to communicate with the
controller 10 in the different network in the "ether type" region
of the "optional TLVs" region in each LLDP frame as shown in FIG.
7.
[0330] At this moment, the identification information (or the
controller ID) of the controller and identifying conditions (or
rules) of packets are contained in each LLDP frame.
[0331] The identifying condition calculation section 115 in each of
the controllers 10-1 and 10-2 requests the control message
processing section 112 to instruct transmission of the LLDP frame
to each node device 20. In an actual implementation, the
identifying condition calculation section 115 may simply output the
LLDP frames to the control message processing section 112. It
should be noted that, in an actual implementation, the identifying
condition calculation section 115 may return the LLDP frames in
which identifying conditions (or rules) of packets are embedded to
the topology management section 111. In this case, similarly to the
first embodiment, the topology management section 111 requests the
control message processing section 112 to instruct transmission of
the LLDP frame to each node device 20.
[0332] The control message processing section 112 in each of the
controllers 10-1 and 10-2 prepares the packet-out message
incorporating the LLDP frame and the transmission instruction, in
response to the request for instructing the transmission of the
LLDP frame to each node device 20, and requests the node
communication section 113 to transmit the packet-out message to all
of the node devices 20 in its own network. The packet-out message
includes transmission instructions which instruct to transmit the
LLDP frame from all of the interface units 22 in each node device
20.
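The fan-out described in paragraph [0332] amounts to one packet-out message per node device, each carrying the LLDP frame and an instruction to transmit it from every interface unit. A minimal sketch, with an assumed message layout:

```python
def packet_out_messages(lldp_frame: bytes, nodes: dict) -> list[dict]:
    """Build one packet-out message per node device.

    `nodes` maps a node device ID to the list of its interface unit IDs;
    the dict-based message shape is illustrative only.
    """
    return [
        {"node": node_id,
         "payload": lldp_frame,
         # Transmit the LLDP frame from all interface units of the node.
         "instruction": {"transmit_from": ports}}
        for node_id, ports in nodes.items()
    ]
```

The node communication section would then send each message to the corresponding node device over its control channel.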
[0333] The node communication section 113 in each of the
controllers 10-1 and 10-2 transmits the packet-out message from the
control message processing section 112 to all of the node devices
20 in its own network.
[0334] The communication unit 21 in each node device 20 transmits
the LLDP frame to the connection destinations from all of the
interface units 22 in the node device 20. Also, the communication
unit 21 receives the LLDP frames from the connection destinations
via the interface units 22.
[0335] The communication unit 21 in each node device 20 transmits
the packet-in message as inquiry information to the controller 10
that controls the node device 20 via the control channel. At this
time, the communication unit 21 in each node device 20 incorporates
the LLDP frame, interface information of the interface unit that
receives the LLDP frame (or the interface unit ID), and
identification information of the node device (or the node device
ID) in the packet-in message.
[0336] Usually, the communication unit 21 in each node device 20
receives an LLDP frame having the identification information (or
the controller ID) of the controller 10 in the same network from
the node device 20 in the same network, and transmits a packet-in
message incorporating the LLDP frame as inquiry information to the
controller 10.
[0337] When the source is the node device 20-3 and the destination
is the node device 20-4, however, an LLDP frame having
identification information (the controller ID) of the controller
10-1 is transmitted to the controller 10-2. That is, when receiving
an LLDP frame having identification information (or the controller
ID) of the controller 10-1 from the node device 20-3, the
communication unit 21 in the node device 20-4 transmits a packet-in
message incorporating the LLDP frame as inquiry information to the
controller 10-2.
[0338] When the source is the node device 20-4 and the destination
is the node device 20-3, on the other hand, an LLDP frame having
identification information (or the controller ID) of the controller
10-2 is transmitted to the controller 10-1. That is, when receiving
the LLDP frame having identification information (or the controller
ID) of the controller 10-2 from the node device 20-4, the
communication unit 21 in the node device 20-3 transmits a packet-in
message incorporating the LLDP frame as inquiry information to the
controller 10-1.
[0339] The node communication section 113 in each of the
controllers 10-1 and 10-2 receives the above-described packet-in
message from each node device 20 of the connection destination
through the control channel and outputs the packet-in message to
the control message processing section 112.
[0340] The control message processing section 112 in each of the
controllers 10-1 and 10-2 analyzes the received packet-in messages
and extracts the LLDP frames incorporated in the packet-in
messages. The control message processing section 112 outputs the
extracted LLDP frames to the adjacency discovery section 114 and
the identifying condition calculation section 115.
[0341] The adjacency discovery section 114 in each of the
controllers 10-1 and 10-2 operates similarly to the first
embodiment.
[0342] In this embodiment, the identifying condition calculation
section 115 in each of the controllers 10-1 and 10-2 determines
whether the identification information (controller ID) contained in
each LLDP frame is identical to the identification information
(controller ID) of its own controller 10.
[0343] Here, the identifying condition calculation section 115 in
each of the controllers 10-1 and 10-2 refers to the "identification
information" region (or the "controller ID" region) of the
"optional TLVs" region of the LLDP frame (see FIG. 7), and
determines whether the identification information (or the
controller ID) contained in the "identification information" region
(or the "controller ID" region) is identical to the identification
information (or the controller ID) of its own controller 10.
[0344] If the identification information (controller ID) contained
in an LLDP frame differs from the identification information
(controller ID) of its own controller 10, the identifying condition
calculation section 115 in each of the controllers 10-1 and 10-2
determines that the LLDP frame results from the controller 10 in
the different network. In this case, the identifying condition
calculation section 115 determines that the identifying conditions
(or the rules) contained in the LLDP frame are the identifying
conditions (or the identifying rules) of packets used to
communicate with the controller 10 in the different network.
[0345] In this embodiment, the identifying condition calculation
section 115 in each of the controllers 10-1 and 10-2 refers to the
"MAC address" region of the "optional TLVs" region in the LLDP
frame (see FIG. 7) and obtains the MAC address of the network
interface in the controller 10 in the different network.
[0346] Also, the identifying condition calculation section 115 in
each of the controllers 10-1 and 10-2 refers to the "ether type"
region (or the "eth_type" region) of the "optional TLVs" region in
the LLDP frame shown in FIG. 7, and obtains information that
indicates the ether type of packets used to communicate with the
controller 10 in the different network.
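The decision in paragraphs [0342] to [0346] can be sketched as follows: given an LLDP frame already parsed into the FIG. 7 regions, the embedded MAC address and ether type are kept as identifying conditions only when the frame's controller ID differs from the receiving controller's own. The field names are illustrative:

```python
def foreign_identifying_conditions(frame: dict, own_id: str):
    """Return the identifying conditions carried in an LLDP frame,
    or None when the frame came from the controller's own network."""
    if frame["controller_id"] == own_id:
        return None  # same network: no counterpart conditions to learn
    # Different network: the frame's MAC address and ether type are the
    # identifying conditions of packets used to reach that controller.
    return {"mac": frame["mac"], "eth_type": frame["eth_type"]}
```

For example, the controller 10-1 receiving a frame carrying controller 10-2's ID would extract the conditions, while controller 10-2 receiving the same frame would discard them.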
[0347] In this way, an LLDP frame prepared by each of the
controllers 10-1 and 10-2 is transmitted from the interface units
of each node device, which allows the network adjacency to be
discovered and the identifying conditions (or the rules) to be
recognized at the same time during the topology retrieval.
Third Embodiment
[0348] A third embodiment of the present invention is described
below.
[0349] The system configuration of this embodiment is identical to
that shown in FIG. 1.
[0350] The system configuration of this embodiment differs from the
system configuration of the first embodiment shown in FIG. 2 in
that the controllers 10 and the node devices 20 are logically
connected via control channels; the interfaces are not physically
connected to each other via data communication links.
[0351] Each controller 10 communicates with the node device 20 at
the boundary with a different network. That is, the controller 10-1
communicates with the node device 20-3 via a control channel. The
controller 10-2 communicates with the node device 20-4 via a
control channel.
[0352] Each node device 20 at the boundary with a different network
communicates with the node device 20 in the different network
through a data communication link. In this embodiment, the node
device 20-3 communicates with the node device 20-4 through a data
communication link. The node device 20-4 communicates with the node
device 20-3 through the data communication link.
[0353] In this embodiment, each controller 10 sets the node device
20 located on the boundary with a different network with processing
rules (or flow entries) which instruct to, when receiving a packet
addressed to the controller 10 in its own network from a different
network, transfer the packet via the control channel to the
controller 10 in its own network. That is, the controller 10-1 sets
the node device 20-3 with processing rules (or flow entries) which
instruct the node device 20-3 to transfer a packet addressed to the
controller 10-1 when receiving the packet from the controller 10-2.
The controller 10-2 sets the node device 20-4 with processing rules
(or flow entries) which instruct the node device 20-4 to transfer a
packet addressed to the controller 10-2 when receiving the packet
from the controller 10-1.
[0354] This allows communications between the controllers 10-1 and
10-2.
[0355] (Configuration of Controller in Third Embodiment)
[0356] An exemplary configuration of the controller 10 in this
embodiment is described with reference to FIG. 8. Note that this
exemplary configuration is common to the controllers 10-1 and
10-2.
[0357] The controller 10 includes a node device control section 11,
a sorting section 12, a sorting rule storage section 13, one or
more virtual ports 14 (one shown) and an interface management
section 15.
[0358] The node device control section 11 and the interface
management section 15 basically have same functions as those in the
first embodiment.
[0359] The node device control section 11 controls the node devices
20 connected thereto via the control channels. For example, the
node device control section 11 executes a software program which
allows operating as an OpenFlow controller (OFC) in an OpenFlow
network. Here, the node device control section 11 monitors and
manages each interface unit contained in each node device 20 via
the control channel, and sets each node device 20 with processing
rules (or flow entries) for packets transmitted and received in its
interface units. Examples of the contents of a processing rule (a
flow entry) for a packet include an instruction to output the
packet to an interface unit or to the node device control section
11 specified on the basis of the feature of the packet received by
the interface unit, an instruction to output a packet prepared by
the node device control section 11 to a specified interface unit
when the packet is received, and the like.
[0360] In this embodiment, when receiving a packet from a node
device 20, the node device control section 11 outputs the packet to
the sorting section 12 after attaching information that identifies
the node device 20 and the interface unit 22 of the source of the
packet. Also, when receiving a packet from the sorting section 12,
the node device control section 11 analyzes information attached to
the packet that indicates the node device 20 and the interface unit
22 of the source of the packet and performs control to output the
packet from a specified interface unit 22 in a specified node
device 20 by issuing an output instruction specifying a proper
control channel.
[0361] The sorting section 12, upon receiving a packet from a
virtual port 14, refers to sorting rules stored in the sorting rule
storage section 13 to specify a node device 20 and an interface
unit 22 which are suitable as the output destination, on the basis
of sorting conditions, which may include the feature of the packet,
and identification information of the virtual port of the source of
the packet, and outputs a message incorporating the packet and an
output instruction to the specified node device 20 and interface
unit 22 to the node device control section 11. Also, when receiving
a packet from the node device control section 11, the sorting
section 12 refers to the sorting rules stored in the sorting rule
storage section 13 to specify a virtual port suitable for the
output destination, on the basis of sorting conditions, which may
include the feature of the packet, identification information of
the node device of the source, and identification information of
the interface section and the like, and outputs the packet to the
specified virtual port. In an actual implementation, the sorting
section 12 may be attained by installing an existing virtual
machine monitor (VMM), a hypervisor or the like with the
above-described functions.
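The two lookup directions in paragraph [0361] can be sketched with a one-to-one sorting rule of the kind paragraph [0362] describes, correlating virtual ports with (node device, interface unit) pairs. The rule shape and the identifiers are assumptions for the sketch:

```python
# One-to-one sorting rules, as might be held in the sorting rule
# storage section 13: virtual port -> (node device, interface unit).
SORTING_RULES = {
    "vport-1": ("node-20-3", 2),
    "vport-2": ("node-20-1", 1),
}


def sort_outbound(vport: str):
    """Pick the node device and interface unit for a packet that
    arrived on a virtual port."""
    return SORTING_RULES[vport]


def sort_inbound(node: str, port: int):
    """Pick the virtual port for a packet that arrived from a node
    device via the node device control section."""
    inverse = {dest: vp for vp, dest in SORTING_RULES.items()}
    return inverse[(node, port)]
```

A fuller implementation could also key the lookup on packet features (addresses, packet kind), as the paragraph above notes.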
[0362] The sorting rule storage section 13 stores the sorting rules
of packets exchanged between the virtual ports in the controller 10
and the interface units in the respective node devices 20. Here,
the sorting rule storage section 13 stores information required to
sort packets as the sorting rules. The sorting rule storage section
13 returns and provides suitable sorting rules in response to a
reference request from the node device control section 11 and the
sorting section 12. For example, the sorting rule storage section
13 stores information which correlates virtual ports with interface
units in a one-to-one relationship; and information which
correlates features of packets (source and destination addresses,
the kind of packets and the like) with virtual ports and interface
units in a one-to-one relationship. The sorting rule storage
section 13 may be attained by using an RDB (relational database).
It should be noted that the relations described in the sorting
rules may be arbitrarily modified in accordance with the OS
(Operating System), software and the like used in the computer
which operates as the controller 10, or may be manually modified in
accordance with a user operation. For example, the relations
described in the sorting rules may be dynamically modified as a
part of the QoS control. It should be noted, however, that these
are merely examples and an actual implementation is not limited to
these examples.
[0363] The virtual ports 14 are each a virtual network interface
provided in the controller 10. The virtual ports 14 are recognized
as an entity equivalent to a physical network interface or treated
in the same way as a physical network interface, by the OS
(operating system) of the computer operating as the controller 10
and the like. This implies that the virtual ports 14 are each
capable of transmitting and receiving packets. For example, the
virtual ports 14 may be implemented as a virtual device, such as a
TUN/TAP installed in the OS (Operating System) or other software.
Also, individual virtual machines (VM) operating on the controller
10 may each contain an OS (Operating System) and a virtual port 14.
It should be noted, however, that an actual implementation is not
limited to these examples.
[0364] The interface management section 15 manages the network
interface in the controller 10. Here, the interface management
section 15 is connected to the virtual ports 14 and capable of
communicating with the virtual ports 14.
[0365] In this embodiment, the node device control section 11 is
connected to the sorting section 12, the sorting rule storage
section 13 and the interface management section 15.
[0366] It should be noted that the interface management section 15 may
be connected to any of the node devices 20 via a data communication
link, similarly to the first embodiment.
[0367] (Configuration of Node Device Control Section)
[0368] An exemplary configuration of the node device control
section 11 of this embodiment will be described below.
[0369] In this embodiment, the configuration of the node device
control section 11 is basically identical to that of the first
embodiment.
[0370] The node device control section 11 includes a topology
management section 111, a control message processing section 112, a
node communication section 113, an adjacency discovery section 114,
an identifying condition calculation section 115, a route
calculation section 116, a processing rule calculation section 117,
a processing rule management section 118 and a processing rule
storage section 119, similarly to that shown in FIG. 3.
[0371] The topology management section 111, the control message
processing section 112, the node communication section 113, the
adjacency discovery section 114, the identifying condition
calculation section 115, the route calculation section 116, the
processing rule calculation section 117, the processing rule
management section 118 and the processing rule storage section 119
basically operate similarly to those of the first embodiment.
[0372] It should be noted, however that, in this embodiment, the
control message processing section 112 is further connected to the
sorting section 12 and the sorting rule storage section 13.
[0373] (Operation of Topology Retrieval in Third Embodiment)
[0374] In one operation example, the topology retrieval in this
embodiment is basically achieved in the procedure similar to that
of the first embodiment or the second embodiment.
[0375] For example, the operation of the topology retrieval of this
embodiment may involve the operations described in the sections
entitled "(1) Operation for Transmitting LLDP Frame", "(2)
Operation for Transmitting Inquiry Information of LLDP Frame via
Control Channel", and "(4) Operation for Preparing Topology
Information"; these sections describe the exemplary operation of
the topology retrieval in the first embodiment.
[0376] This allows each controller 10 to dynamically recognize and
monitor existences of a controller 10 in a different network, a
node device 20 located on the boundary with the different network,
and its interface unit 22.
[0377] (Operation for Setting Processing Rule in Third
Embodiment)
[0378] An exemplary operation of setting processing rules (or flow
entries) in the third embodiment is described below.
[0379] The identifying condition calculation section 115 in each of
the controllers 10-1 and 10-2 calculates identifying conditions (or
rules) of packets used to communicate with the controller 10 in a
different network, on the basis of the topology information and the
boundary information of the network, which are stored in the
topology management section 111.
[0380] The route calculation section 116 in each of the controllers
10-1 and 10-2 determines the end points of a route used in the
communication, on the basis of the topology information and the
boundary information of the network, which are stored in the
topology management section 111.
[0381] Here, each end point of the route is defined as a node
device located on the boundary with a different network. In the
network 1, the node device 20-3 is defined as one end point. In the
network 2, the node device 20-4 is defined as the other end
point.
[0382] The route calculation section 116 in each of the controllers
10-1 and 10-2 calculates the route which connects the controller 10
to the node device 20 located on the boundary with a different
network through the control channel.
[0383] The processing rule calculation section 117 in each of the
controllers 10-1 and 10-2 obtains the route used for packet
transfer from the route calculation section 116 and obtains the
identifying conditions (or the identifying rules) of packets
transferred on the route from the identifying condition calculation
section 115.
[0384] The processing rule calculation section 117 in each of the
controllers 10-1 and 10-2 uses the obtained information and
calculates processing rules (or flow entries) to be set to the node
device 20 located on the boundary with the different network. At
this time, the processing rule calculation section 117 incorporates
identification information (or the processing rule ID) of each
processing rule (flow entry), into each processing rule (or flow
entry). Specifically, the processing rule ID is stored in a cookie
region in each processing rule (or flow entry). Also, contents of
processing (or the actions) of the processing rules (or the flow
entries) are defined to instruct an operation in which, when
receiving a packet addressed to the controller 10 in its own
network from the controller 10 in a different network, a packet-in
message incorporating the packet and the processing rule ID is
transmitted to the controller 10 in its own network through the
control channel.
[0385] For example, the processing rule calculation section 117 in
the controller 10-1 calculates processing rules (or flow entries)
which instruct to, when receiving a packet addressed to the
controller 10-1 from the controller 10-2, transmit a packet-in
message incorporating the packet and a processing rule ID "A" to
the controller 10-1 via the control channel, as the processing
rules (or the flow entries) to be set to the node device 20-3.
[0386] Similarly, the processing rule calculation section 117 in
the controller 10-2 calculates processing rules (or flow entries)
which instruct to, when receiving a packet addressed to the
controller 10-2 from the controller 10-1, transmit a packet-in
message incorporating the packet and a processing rule ID "B" to
the controller 10-2 via the control channel, as the processing
rules (or the flow entries) to be set to the node device 20-4.
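The boundary-node rules of paragraphs [0384] to [0386] pair a cookie region carrying the processing rule ID with an action that wraps a matching packet in a packet-in message. A minimal sketch under assumed names, with placeholder MAC addresses:

```python
def boundary_rule(rule_id: str, own_ctrl_mac: str, peer_ctrl_mac: str) -> dict:
    """A flow entry for the node device on the network boundary.

    The cookie region stores the processing rule ID; the action sends a
    packet-in message to the controller over the control channel."""
    return {
        "cookie": rule_id,
        "match": {"src": peer_ctrl_mac, "dst": own_ctrl_mac},
        "action": "packet_in_to_controller",
    }


def handle_packet(rule: dict, packet: dict):
    """Emit the packet-in message (packet plus processing rule ID)
    when the packet matches the rule; otherwise do nothing."""
    if (packet["src"], packet["dst"]) == (rule["match"]["src"],
                                          rule["match"]["dst"]):
        return {"packet_in": packet, "rule_id": rule["cookie"]}
    return None
```

Under this sketch, the rule with ID "A" at the node device 20-3 matches packets from controller 10-2 addressed to controller 10-1 and forwards them, tagged with "A", via the control channel.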
[0387] The processing rule management section 118 in each of the
controllers 10-1 and 10-2 registers information related to the
processing rules (or the flow entries) calculated by the processing
rule calculation section 117 into the processing rule storage
section 119, and requests the control message processing section
112 to set the node device 20 located on the boundary with the
different network with the processing rules (or the flow
entries).
[0388] The control message processing section 112 in each of the
controllers 10-1 and 10-2, upon receiving the request for setting
the node device 20 with the processing rules (or the flow entries),
prepares a flow modification message (FlowMod-Add) to register the
processing rules (or the flow entries) in the node device 20. The
control message processing section 112 then requests the node
communication section 113 to transmit the flow modification message
(FlowMod-Add) to the node device 20 located on the boundary with
the different network.
[0389] The node communication section 113 in each of the
controllers 10-1 and 10-2, upon receiving the transmission request
of the flow modification message (FlowMod-Add) to the node device
20 located on the boundary with the different network, transmits
the flow modification message (FlowMod-Add) to the node device 20
via the control channel.
[0390] The communication unit 21 in each node device 20, upon
receiving the flow modification message (FlowMod-Add) from the
controller 10 via the control channel, registers therein the
processing rules (or the flow entries) on the basis of the contents
of the flow modification message (FlowMod-Add).
[0391] Hereafter, the communication unit 21, upon receiving a
packet via the interface unit 22, processes the packet in
accordance with the contents of the registered processing rules (or
the flow entries).
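The packet handling in the communication unit 21 just described amounts to a first-match lookup over the registered flow entries. The following is a minimal sketch under the same hypothetical dictionary representation as above; field names and addresses are illustrative, not taken from the OpenFlow specification.

```python
def process_packet(flow_table, packet):
    """Return the action of the first registered flow entry whose match
    fields all agree with the packet's header fields; None otherwise."""
    for entry in flow_table:
        if all(packet.get(field) == value
               for field, value in entry["match"].items()):
            return entry["action"]
    return None

# A flow table holding the single boundary entry registered via
# FlowMod-Add, with hypothetical placeholder addresses.
flow_table = [
    {"match": {"src": "addr-10-2", "dst": "addr-10-1"},
     "action": {"type": "packet_in", "processing_rule_id": "A"}},
]
action = process_packet(flow_table, {"src": "addr-10-2", "dst": "addr-10-1"})
```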
[0392] (Operation for Notifying Processing Rules Between
Controllers in Third Embodiment)
[0393] An exemplary operation for notifying processing rules (flow
entries) between the controllers 10 in the third embodiment is
described below.
[0394] The control message processing section 112 in each of the
controllers 10-1 and 10-2 obtains the processing rules (the flow
entries) with regard to the communications between the controllers
10 from the processing rule management section 118, and obtains the
processing rules ID embedded in the processing rules (the flow
entries). The control message processing section 112 then prepares
a notification packet incorporating the processing rules ID. For
example, the control message processing section 112 in each of the
controllers 10-1 and 10-2 incorporates the processing rules ID in
the cookie region or data region of the notification packet. When
preparing the notification packet, the control message processing
section 112 defines the destination address as the address of the
controller 10 in the different network, and defines the source
address as the address of the controller 10 in its own network. It
should be noted that the control message processing section 112 may
carry out this process at the same time as preparing the flow
modification message (FlowMod-Add).
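The construction of the notification packet in the paragraph above can be sketched as follows. The representation is hypothetical; the embodiment allows the processing rule ID to be carried in either the cookie region or the data region, which is modeled here as a selectable field name, and the addresses are placeholders.

```python
def build_notification_packet(own_address, peer_address, rule_id,
                              region="data"):
    """Hypothetical notification packet addressed to the peer controller,
    carrying the processing rule ID in the named region ("data" or
    "cookie")."""
    packet = {"src": own_address, "dst": peer_address}
    packet[region] = {"processing_rule_id": rule_id}
    return packet

# Controller 10-1 notifies controller 10-2 of its processing rule ID "A".
note = build_notification_packet("addr-10-1", "addr-10-2", "A")
```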
[0395] The control message processing section 112 in each of the
controllers 10-1 and 10-2 prepares a packet-out message which
incorporates the above-described notification packet and a
transmission instruction thereof.
[0396] The control message processing section 112 in each of the
controllers 10-1 and 10-2 requests the node communication section
113 to transmit the packet-out message to the node device 20
located on the boundary with the different network.
[0397] The node communication section 113 in each of the
controllers 10-1 and 10-2, when receiving the transmission request
of the packet-out message to the node device 20 located on the
boundary with the different network, transmits the packet-out
message to the node device 20 via the control channel. For example,
the node communication section 113 in the controller 10-1 transmits
the packet-out message to the node device 20-3. Similarly, the node
communication section 113 in the controller 10-2 transmits the
packet-out message to the node device 20-4.
[0398] When receiving the packet-out message from the controller 10
via the control channel, the communication unit 21 in the node
device 20 located on the boundary with the different network
transmits the notification packet from the interface unit 22 to the
node device 20 in the different network on the basis of the
contents of the packet-out message. For example, the communication
unit 21 in the node device 20-3 transmits the notification packet
to the node device 20-4 via the data communication link. Similarly,
the communication unit 21 in the node device 20-4 transmits the
notification packet to the node device 20-3 via the data
communication link.
[0399] Also, the communication unit 21 in the node device 20
located on the boundary with the different network, when receiving
a notification packet from the node device 20 in the different
network through the interface unit 22, transmits a packet-in
message which incorporates the notification packet as inquiry
information to the controller 10 that controls the node device 20
via the control channel. When receiving the notification packet
from the node device 20-4, for example, the communication unit 21
in the node device 20-3 transmits a packet-in message which
incorporates the notification packet to the controller 10-1 via the
control channel. Similarly, when receiving a notification packet
from the node device 20-3, the communication unit 21 in the node
device 20-4 transmits a packet-in message which incorporates the
notification packet to the controller 10-2 via the control
channel.
[0400] The node communication section 113 in each of the
controllers 10-1 and 10-2 receives the above-described packet-in
message from the node device 20 located on the boundary with the
different network via the control channel, and outputs the
packet-in message to the control message processing section
112.
[0401] The control message processing section 112 in each of the
controllers 10-1 and 10-2 extracts the notification packet stored
in the above packet-in message, and obtains the processing rules ID
that indicates the processing rules (flow entries) of the
controller 10 in the different network from the contents of the
notification packet.
[0402] For example, the controller 10-1 recognizes that a
processing rule identified by a processing rule ID "B" is used in
the controller 10-2 when transmitting a packet in which the
destination address is defined as the address of the controller
10-2 and the source address is defined as the address of the
controller 10-1. Similarly, the controller 10-2 recognizes that a
processing rule identified by a processing rule ID "A" is used in
the controller 10-1, when transmitting a packet in which the
destination address is defined as the address of the controller
10-1 and the source address is defined as the address of the
controller 10-2.
[0403] (Operation for Setting Sorting Rule in Third Embodiment)
[0404] An exemplary operation for setting the sorting rules in the
third embodiment is described below.
[0405] When receiving a setting request of processing rules (flow
entries) to the node device 20 located on the boundary with the
different network from the processing rule management section 118,
the control message processing section 112 in each of the
controllers 10-1 and 10-2 prepares a sorting rule and stores the
prepared sorting rule in the sorting rule storage section 13.
[0406] At this time, the control message processing section 112 in
each of the controllers 10-1 and 10-2 specifies identification
information (the node device ID) of the node device 20 located on
the boundary with the different network as information to be stored
in a "node device ID" region in the sorting rule. The control
message processing section 112 also specifies identification
information (or the interface ID) of the interface unit 22 used for
the connection to the node device 20 in the different network as
information to be stored in an "interface ID" region of the sorting
rule. The control message processing section 112 also specifies the
address of the controller 10 in its own network as information to
be stored in an "own address" region of the sorting rule. The
control message processing section 112 also specifies the address
of the controller 10 in the different network as information of a
"counterpart address" region of the sorting rule. The control
message processing section 112 specifies the processing rules ID
embedded in the processing rule (or the flow entry) as information
to be stored in a "processing rule ID" region of the sorting
rule.
[0407] Also, the sorting section 12 in each of the controllers 10-1
and 10-2 recognizes a virtual port 14 used to communicate with the
controller 10 in the different network, and specifies
identification information of the virtual port (or the virtual port
ID) as information to be stored in a "virtual port ID" region for
the above-described sorting rule.
[0408] It should be noted that the setting method of the sorting
rules is not limited to this method. For example, sorting rules may
be manually stored in the sorting rule storage section 13 in
advance by an operator's input or the like.
[0409] (Configuration of Sorting Rule)
[0410] An exemplary format of the sorting rules stored in the
sorting rule storage section 13 is described with reference to
FIGS. 9A and 9B.
[0411] A sorting rule includes a "virtual port ID" region, a "node
device ID" region, an "interface ID" region, an "own address"
region, a "counterpart address" region and a "processing rule ID"
region.
[0412] The "virtual port ID" region contains identification
information (or the virtual port ID) of the virtual port 14 used to
communicate with the controller 10 in a different network.
[0413] The "node device ID" region contains identification
information (or the node device ID) of the node device 20 located
on the boundary with the different network.
[0414] The "interface ID" region contains identification
information (or the interface ID) of the interface unit 22 used for
the connection to the node device 20 in the different network.
[0415] The "own address" region contains the address of its own
controller 10.
[0416] The "counterpart address" region contains the address of a
communication counterpart.
[0417] The "processing rule ID" region contains identification
information (or the processing rule ID) of a processing rule (or a
flow entry).
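The six-region sorting rule format just listed can be sketched as a small record type. This is only an illustrative data model, assuming the region names map directly to fields and using the FIG. 9A values with a placeholder address string.

```python
from dataclasses import dataclass

@dataclass
class SortingRule:
    """One sorting rule with the six regions described above."""
    virtual_port_id: str
    node_device_id: str
    interface_id: str
    own_address: str
    counterpart_address: str
    processing_rule_id: str

# The sorting rule of FIG. 9A, held by controller 10-1; the address
# strings are hypothetical placeholders.
rule_10_1 = SortingRule("VP1", "DPID3", "IF3",
                        "addr-10-1", "addr-10-2", "A")
```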
[0418] (Supplement)
[0419] The above-described addresses may be a MAC address, an IP
address, or other identification information.
[0420] Each of the addresses described in the "own address" region
and the "counterpart address" region may be a destination address
or a source address. For example, when the source address of a
packet transferred between the controllers 10 is the address
described in the "own address" region of a sorting rule and the
destination address is the address described in the "counterpart
address" region of the sorting rule, the packet matches the sorting
rule. Likewise, when the source address of such a packet is the
address described in the "counterpart address" region of a sorting
rule and the destination address is the address described in the
"own address" region of the sorting rule, the packet also matches
the sorting rule.
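The bidirectional match condition described in the paragraph above can be written as a short predicate. This is a sketch under the assumption that a sorting rule is held as a dictionary with "own_address" and "counterpart_address" keys; the addresses are hypothetical placeholders.

```python
def matches_sorting_rule(rule, src, dst):
    """A packet matches when its (source, destination) addresses equal the
    rule's (own, counterpart) addresses in either direction."""
    return ((src == rule["own_address"]
             and dst == rule["counterpart_address"])
            or (src == rule["counterpart_address"]
                and dst == rule["own_address"]))

rule = {"own_address": "addr-10-1", "counterpart_address": "addr-10-2"}
```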
[0421] In this embodiment, it is assumed that an "own address"
region contains the address of the controller 10 in its own
network, and a "counterpart address" region contains the address of
a controller 10 in a different network. Moreover, it is assumed
that the virtual ports 14 are assigned with addresses in each
controller 10. For example, a typical virtual machine (VM) may be
assigned with a virtual MAC address, a virtual IP address or the
like.
[0422] (Identification Information of Virtual Port and Interface
Unit in Third Embodiment)
[0423] Here, a virtual port 14 in the controller 10-1 is assigned
with a virtual port ID "VP1". A virtual port 14 in the controller
10-2 is assigned with a virtual port ID "VP2". The interface unit
22 used for the connection to the node device 20-4 in the node
device 20-3 is assigned with an interface ID "IF3". The interface
unit 22 used for the connection to the node device 20-3 in the node
device 20-4 is assigned with an interface ID "IF4".
[0424] (Sorting Rule in Controller in Network 1)
[0425] FIG. 9A describes an example of the sorting rule in the
controller 10-1 in the network 1.
[0426] (1) Sorting Rule That Uses Control Channel
[0427] Virtual port ID: VP1
[0428] Node device ID: DPID3
[0429] Interface ID: IF3
[0430] Own address: address of controller 10-1
[0431] Counterpart address: address of controller 10-2
[0432] Processing rule ID: A
[0433] (Sorting Rule in Controller in Network 2)
[0434] FIG. 9B describes an example of a sorting rule in the
controller 10-2 in the network 2.
[0435] (1) Sorting Rule That Uses Control Channel
[0436] Virtual port ID: VP2
[0437] Node device ID: DPID4
[0438] Interface ID: IF4
[0439] Own address: address of controller 10-2
[0440] Counterpart address: address of controller 10-1
[0441] Processing rule ID: B
[0442] (Operation of Packet Transfer Between Controllers in Third
Embodiment)
[0443] An exemplary operation for packet transfer between the
controllers 10 in the third embodiment is described below.
[0444] The sorting section 12 in each of the controllers 10-1 and
10-2, when receiving a packet from a virtual port 14, determines
identification information (or the virtual port ID) of the virtual
port 14, the destination address and the source address of the
received packet.
[0445] The sorting section 12 in each of the controllers 10-1 and
10-2 searches the sorting rules stored in the sorting rule storage
section 13, using the identification information (virtual port ID)
of the virtual port 14, the destination address and the source
address as search keys, to specify a node device 20 and an
interface unit 22 which are suitable as the output destination of
the packet.
[0446] For example, the sorting section 12 in the controller 10-1
searches the sorting rules stored in the sorting rule storage
section 13, using the virtual port ID "VP1" of the virtual port 14,
the destination address (that is, the address of the controller
10-2), the source address (that is, the address of the controller
10-1) as search keys, and consequently specifies the node device ID
"DPID3" and the interface ID "IF3".
[0447] Similarly, the sorting section 12 in the controller 10-2
searches the sorting rules stored in the sorting rule storage
section 13 using the virtual port ID "VP2" of the virtual port 14,
the destination address (that is, the address of the controller
10-1) and the source address (that is, the address of the
controller 10-2) as search keys, and consequently specifies the
node device ID "DPID4" and the interface ID "IF4".
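The forward lookup performed by the sorting section 12 in the two examples above can be sketched as a linear search over the stored sorting rules. The dictionary representation and the address strings are hypothetical; the search keys follow the description (virtual port ID, destination address, source address).

```python
def lookup_output(rules, virtual_port_id, dst, src):
    """Forward lookup: map (virtual port ID, destination, source) to the
    (node device ID, interface ID) that should emit the packet."""
    for r in rules:
        if (r["virtual_port_id"] == virtual_port_id
                and r["counterpart_address"] == dst
                and r["own_address"] == src):
            return r["node_device_id"], r["interface_id"]
    return None

# The sorting rule of FIG. 9A in controller 10-1, with placeholder
# addresses.
rules_10_1 = [
    {"virtual_port_id": "VP1", "node_device_id": "DPID3",
     "interface_id": "IF3", "own_address": "addr-10-1",
     "counterpart_address": "addr-10-2", "processing_rule_id": "A"},
]
```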
[0448] The sorting section 12 in each of the controllers 10-1 and
10-2 outputs a message incorporating the packet and an instruction
to output the packet to the specified node device 20 and interface
unit 22, to the control message processing section 112 in the node
device control section 11.
[0449] For example, the sorting section 12 in the controller 10-1
outputs a message incorporating the packet and an instruction to
output the packet to the node device 20-3 and the specified
interface unit 22 thereof, to the control message processing
section 112 in the node device control section 11.
[0450] Similarly, the sorting section 12 in the controller 10-2
outputs a message incorporating the packet and an instruction to
output the packet to the node device 20-4 and the specified
interface unit 22 thereof, to the control message processing
section 112 in the node device control section 11.
[0451] The control message processing section 112 in each of the
controllers 10-1 and 10-2 refers to the sorting rules stored in the
sorting rule storage section 13 to prepare a packet-out message
incorporating the packet and an instruction to transmit the packet,
and requests the node communication section 113 to transmit the
packet-out message to the node device 20 located on the boundary
with the different network.
[0452] The node communication section 113 in each of the
controllers 10-1 and 10-2 transmits the packet-out message received
from the control message processing section 112 to the node device
20 located on the boundary with the different network.
[0453] For example, the node communication section 113 in the
controller 10-1 transmits the packet-out message from the control
message processing section 112 to the node device 20-3.
[0454] Similarly, the node communication section 113 in the
controller 10-2 transmits the packet-out message from the control
message processing section 112 to the node device 20-4.
[0455] The communication unit 21 in the node device 20 located on
the boundary with the different network transmits the packet
incorporated in the above-described packet-out message from the
specified interface unit 22 to the node device 20 in the different
network.
[0456] For example, the communication unit 21 in the node device
20-3 transmits the packet incorporated in the above-described
packet-out message from the specified interface unit 22 to the node
device 20-4.
[0457] Similarly, the communication unit 21 in the node device 20-4
transmits the packet incorporated in the above packet-out message
from the specified interface unit 22 to the node device 20-3.
[0458] Also, the communication unit 21 in the node device 20
located on the boundary with the different network, when receiving
the packet from the node device 20 in the different network through
the interface unit 22, transmits a packet-in message as inquiry
information to the controller 10 that controls its own node
device 20 via the control channel in accordance with the processing
rules (or the flow entries). At this time, the communication unit
21 incorporates the packet, the node device ID, the interface ID
and the processing rule ID into this packet-in message.
[0459] For example, the communication unit 21 in the node device
20-3, when receiving a packet from the node device 20-4 via the
interface unit 22, transmits a packet-in message as inquiry
information to the controller 10-1 that controls the node device
20-3 via the control channel in accordance with the processing
rules (or the flow entries). At this time, the communication unit
21 in the node device 20-3 incorporates the packet, the node device
ID "DPID3", the interface ID "IF3" and the processing rule ID "A",
into this packet-in message.
[0460] Similarly, the communication unit 21 in the node device
20-4, when receiving a packet from the node device 20-3 via the
interface unit 22, transmits a packet-in message as inquiry
information to the controller 10-2 that controls the node device
20-4 via the control channel in accordance with the processing
rules (or the flow entries). At this time, the communication unit
21 in the node device 20-4 incorporates the packet, the node device
ID "DPID4", the interface ID "IF4" and the processing rule ID "B",
into this packet-in message.
[0461] The node communication section 113 in each of the
controllers 10-1 and 10-2 receives the above-described packet-in
message from the node device 20 connected thereto via the control
channel, and outputs the packet-in message to the control message
processing section 112.
[0462] The control message processing section 112 in each of the
controllers 10-1 and 10-2 analyzes the above-described packet-in
message and obtains the packet, the node device ID, the interface
ID and the processing rule ID. At this time, the control message
processing section 112 can obtain the destination address and the
source address from the header information of the packet and the
like.
[0463] For example, the control message processing section 112 in
the controller 10-1 analyzes the above-described packet-in message
and obtains the packet, the node device ID "DPID3", the interface
ID "IF3" and the processing rule ID "A". At this time, the control
message processing section 112 can obtain the address of the
controller 10-1 as the destination address and the address of the
controller 10-2 as the source address from the header information
of the packet, and the like.
[0464] Similarly, the control message processing section 112 in the
controller 10-2 analyzes the above-described packet-in message and
obtains the packet, the node device ID "DPID4", the interface ID
"IF4" and the processing rule ID "B". At this time, the control
message processing section 112 can obtain the address of the
controller 10-2 as the destination address and the address of the
controller 10-1 as the source address from the header information
of the packet, and the like.
[0465] The control message processing section 112 in each of the
controllers 10-1 and 10-2 searches the sorting rules stored in the
sorting rule storage section 13 by using at least one of the node
device ID, the interface ID, the destination address, the source
address and the processing rule ID as a search key(s), and
recognizes that the packet is addressed to a virtual port 14 in its
own network. The control message processing section 112 then
outputs the packet to the sorting section 12 together with the
information used as the search key(s). At this time, the control
message processing section 112 may output any or all of the node
device ID, the interface ID, the destination address, the source
address and the processing rule ID together with the packet, to the
sorting section 12.
[0466] The sorting section 12 in each of the controllers 10-1 and
10-2 searches the sorting rules stored in the sorting rule storage
section 13 using at least one of the node device ID, the interface
ID, the destination address, the source address and the processing
rule ID as a search key(s), and thereby recognizes that the packet
is addressed to a virtual port 14 in the controller 10 in its own
network, and then outputs the packet to the virtual port 14.
[0467] For example, the sorting section 12 in the controller 10-1
searches the sorting rules stored in the sorting rule storage
section 13 using the processing rule ID "A" as a search key, and
thereby specifies the virtual port ID "VP1". The sorting section 12
in the controller 10-1 then outputs the packet to the virtual port
14 identified by the virtual port ID "VP1".
[0468] Similarly, the sorting section 12 in the controller 10-2
searches the sorting rules stored in the sorting rule storage
section 13 by using the processing rule ID "B" as a search key, and
thereby specifies the virtual port ID "VP2". The sorting section 12
in the controller 10-2 then outputs the packet to the virtual port
14 identified by the virtual port ID "VP2".
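The reverse lookup shown in the two examples above, from the processing rule ID in a packet-in message back to a virtual port, can be sketched as follows. The single-key search mirrors the example in the text; the dictionary representation is hypothetical, and a real search may combine any of the keys listed in the preceding paragraphs.

```python
def lookup_virtual_port(rules, processing_rule_id):
    """Reverse lookup: map the processing rule ID carried in a packet-in
    message to the virtual port that should receive the packet."""
    for r in rules:
        if r["processing_rule_id"] == processing_rule_id:
            return r["virtual_port_id"]
    return None

# Minimal sorting-rule tables for controllers 10-1 and 10-2, reduced
# to the two regions used by this search.
rules_10_1 = [{"virtual_port_id": "VP1", "processing_rule_id": "A"}]
rules_10_2 = [{"virtual_port_id": "VP2", "processing_rule_id": "B"}]
```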
[0469] (Supplement)
[0470] When searching the sorting rules stored in the sorting rule
storage section 13 using at least one of the node device ID, the
interface ID, the destination address, the source address and the
processing rule ID as a search key(s), the control message
processing section 112 and the sorting section 12 may use any one
of the node device ID, the interface ID, the destination address,
the source address and the processing rule ID, or any combination
of them, or all of them as the search key(s).
[0471] It should be noted that the sorting section 12 may
unconditionally specify the interface unit 22 in the node device 20
located on the boundary with the different network as the output
destination of the packet inputted from a predetermined virtual
port 14. Similarly, the sorting section 12 may unconditionally
specify a predetermined virtual port 14, as the output destination
of the packet from the interface unit 22 in the node device 20
located on the boundary with the different network.
[0472] For example, when receiving a packet from a virtual port 14,
the sorting section 12 may search the sorting rules stored in the
sorting rule storage section 13 using only the virtual port ID as
the search key, irrespective of the destination address and the
source address of the inputted packet, and specify a node device ID
and an interface ID which indicate the output destination. When
receiving a packet from the control message processing section 112,
on the other hand, the sorting section 12 may search the sorting
rules stored in the sorting rule storage section 13 using only the
node device ID and the interface ID as the search keys and specify
a virtual port ID which indicates the output destination.
[0473] This embodiment eliminates the need for physically
connecting interfaces of a controller and a node device via a data
communication link, effectively improving the flexibility in actual
implementations.
Fourth Embodiment
[0474] A fourth embodiment of the present invention is described
below.
[0475] In this embodiment, which is based on the third embodiment,
client processes are executed in one of the two controllers, and
server processes are executed in the other controller. At this
time, the two controllers communicate with each other via the node
devices under the control thereof.
[0476] (Configuration of Communication System in Fourth
Embodiment)
[0477] An exemplary configuration of the communications system of
this embodiment is described with reference to FIGS. 10A and 10B.
It should be noted that FIG. 10A shows an exemplary network
configuration of the network 1 and FIG. 10B shows an exemplary
network configuration of the network 2.
[0478] The communications system of this embodiment includes a
controller 10-1, a controller 10-2, and node devices 20-1 to 20-6.
[0479] The controller 10-1 controls the node devices 20-1 to 20-3
and the controller 10-2 controls the node devices 20-4 to 20-6.
[0480] The node devices 20-1 to 20-3 are arranged in the network 1
and the node devices 20-4 to 20-6 are arranged in the network 2. At
least one of the node devices 20-1 to 20-3 is connected to at least
one of the node devices 20-4 to 20-6 via a data communication
link.
[0481] (Configuration of Controller according to Fourth
Embodiment)
[0482] Exemplary configurations of the controller 10-1 and the
controller 10-2 are described below.
[0483] The controllers 10-1 and 10-2 each correspond to the
controller 10 shown in FIG. 8.
[0484] The controller 10-1 includes a node device control section
11-1, a sorting section 12-1, a sorting rule storage section 13-1,
virtual ports 14-11 and 14-12, an interface management section 15-1
and a client section 16.
[0485] The controller 10-2 contains a node device control section
11-2, a sorting section 12-2, a sorting rule storage section 13-2,
virtual ports 14-21 and 14-22, an interface management section 15-2
and a server section 17.
[0486] The node device control section 11-1 and the node device
control section 11-2 each correspond to the node device control
section 11 shown in FIG. 8.
[0487] Here, the node device control section 11-1 controls each of
the node devices 20-1 to 20-3 via a control channel.
[0488] Also, the node device control section 11-2 controls each of
the node devices 20-4 to 20-6 via a control channel.
[0489] The sorting sections 12-1 and 12-2 each correspond to the
sorting section 12 shown in FIG. 8.
[0490] The sorting rule storage sections 13-1 and 13-2 each
correspond to the sorting rule storage section 13 shown in FIG.
8.
[0491] The virtual ports 14-11, 14-12, 14-21 and 14-22 each
correspond to the virtual port 14 shown in FIG. 8.
[0492] The interface management sections 15-1 and 15-2 each
correspond to the interface management section 15 shown in FIG. 8.
It should be noted that the interface management sections 15-1 and
15-2 may have the same function as the interface management section
15 shown in FIG. 3.
[0493] The client section 16 executes processes in the seventh
layer (the application layer or layer 7) out of the seven layers
defined in the OSI reference model. The client section 16 may be
realized by a client processing function provided by the OS
(operating system) or software of the computer operating as the
controller 10. Note that actual implementations of the client
section 16 are not limited to those examples.
[0494] The server section 17 executes processes in the seventh
layer (the application layer, layer 7) out of the seven layers
defined in the OSI reference model. The server section 17 may be
realized by a server processing function provided by the OS
(operating system) or software of the computer operating as the
controller 10. However, actual implementations of the server
section 17 are not limited to those examples.
[0495] It should be noted that each of the controllers 10-1 and
10-2 may incorporate both the client section 16 and the server
section 17.
[0496] (Connection Example of Virtual Port in Fourth
Embodiment)
[0497] In this embodiment, the virtual port 14-11 in the controller
10-1 is assigned with an address "AD11". The virtual port 14-11 is
provided between the sorting section 12-1 and the client section
16, allowing packet transfer between the sorting section 12-1 and
the client section 16. The virtual port 14-11 and the client
section 16 may be installed in the same virtual machine (VM). Also,
the virtual port 14-12 in the controller 10-1 is assigned with an
address "AD12". The virtual port 14-12 is provided between the
sorting section 12-1 and the interface management section 15-1,
allowing packet transfer between the sorting section 12-1 and the
interface management section 15-1. In an actual implementation, the
sorting section 12-1, the virtual ports 14-11, 14-12 and the
interface management section 15-1 may be realized as the functions
installed in the same OS (operating system) and software.
[0498] Similarly, the virtual port 14-21 in the controller 10-2 is
assigned with an address "AD21". The virtual port 14-21 is arranged
between the sorting section 12-2 and the server section 17,
allowing packet transfer between the sorting section 12-2 and the
server section 17. The virtual port 14-21 and the server section 17
may be installed in the same virtual machine (VM). Also, the
virtual port 14-22 in the controller 10-2 is assigned with an
address "AD22". The virtual port 14-22 is provided between the
sorting section 12-2 and the interface management section 15-2,
allowing packet transfer between the sorting section 12-2 and the
interface management section 15-2. In an actual implementation, the
sorting section 12-2, the virtual ports 14-21, 14-22 and the
interface management section 15-2 may be realized as the functions
installed in the same OS (Operating System) and software.
[0499] (Supplement)
[0500] Specifically, the above-described addresses "AD11" and
"AD21" may be defined as any region (field) which can be identified
in the OpenFlow technique, such as the MAC address, the IP address,
the TCP or UDP (User Datagram Protocol) port number and the like,
or a combination of them. It should be noted, however, that actual
implementations are not limited to these examples.
[0501] (Identification Information of Virtual Port in Fourth
Embodiment)
[0502] The virtual port 14-11 in the controller 10-1 is assigned
with a virtual port ID "VP11". The virtual port 14-12 in the
controller 10-1 is assigned with a virtual port ID "VP12". The
virtual port 14-21 in the controller 10-2 is assigned with a
virtual port ID "VP21". The virtual port 14-22 in the controller
10-2 is assigned with a virtual port ID "VP22".
[0503] (Configuration of Node Device in Fourth Embodiment)
[0504] Exemplary configurations of the node devices 20-1 to 20-6
are described below.
[0505] The node devices 20-1 to 20-6 each correspond to the node
device 20 shown in FIG. 4.
[0506] The node device 20-1 includes a communication unit 21-1, an
interface unit 22-11 and an interface unit 22-12. The node device
20-2 includes a communication unit 21-2, an interface unit 22-21
and an interface unit 22-22. The node device 20-3 includes a
communication unit 21-3, an interface unit 22-31 and an interface
unit 22-32. The node device 20-4 includes a communication unit
21-4, an interface unit 22-41 and an interface unit 22-42. The node
device 20-5 includes a communication unit 21-5, an interface unit
22-51 and an interface unit 22-52. The node device 20-6 includes a
communication unit 21-6, an interface unit 22-61 and an interface
unit 22-62.
[0507] The communication units 21-1 to 21-6 each correspond to the
communication unit 21 shown in FIG. 4.
[0508] The interface units 22-11, 22-12, 22-21, 22-22, 22-31,
22-32, 22-41, 22-42, 22-51, 22-52, 22-61 and 22-62 each correspond
to the interface unit 22 shown in FIG. 4.
[0509] (Connection Example of Interface Unit in Fourth
Embodiment)
[0510] The interface unit 22-11 in the node device 20-1 is
connected to the interface management section 15-1 in the
controller 10-1. The interface unit 22-12 in the node device 20-1
is connected to the interface unit 22-21 in the node device 20-2.
The interface unit 22-22 in the node device 20-2 is connected to
the interface unit 22-31 in the node device 20-3. The interface
unit 22-32 in the node device 20-3 is connected to the interface
unit 22-41 in the node device 20-4. The interface unit 22-42 in the
node device 20-4 is connected to the interface unit 22-51 in the
node device 20-5. The interface unit 22-52 in the node device 20-5
is connected to the interface unit 22-61 in the node device 20-6.
The interface unit 22-62 in the node device 20-6 is connected to
the interface management section 15-2 in the controller 10-2.
[0511] The node device 20-3 is arranged in the network 1, and the
node device 20-4 is arranged in the network 2. This implies that
the interface unit 22-32 in the node device 20-3 and the interface
unit 22-41 in the node device 20-4 each serve as an interface unit
22 located on the boundary with a different network.
[0512] (Identification Information of Interface Unit According to
Fourth Embodiment)
[0513] The interface unit 22-11 is assigned with an interface ID
"IF11". The interface unit 22-12 is assigned with an interface ID
"IF12". The interface unit 22-21 is assigned with an interface ID
"IF21". The interface unit 22-22 is assigned with an interface ID
"IF22". The interface unit 22-31 is assigned with an interface ID
"IF31". The interface unit 22-32 is assigned with an interface ID
"IF32". The interface unit 22-41 is assigned with an interface ID
"IF41". The interface unit 22-42 is assigned with an interface ID
"IF42". The interface unit 22-51 is assigned with an interface ID
"IF51". The interface unit 22-52 is assigned with an interface ID
"IF52". The interface unit 22-61 is assigned with an interface ID
"IF61". The interface unit 22-62 is assigned with an interface ID
"IF62".
[0514] (Sorting Rule in Controller in Network 1)
[0515] FIG. 11A shows an example of the sorting rules in the
controller 10-1 in the network 1.
[0516] (1) Sorting Rule which Uses Control Channel
[0517] Virtual port ID: VP11
[0518] Node device ID: DPID3
[0519] Interface ID: IF32
[0520] Own address: AD11
[0521] Counterpart address: AD21
[0522] Processing rule ID: A1
[0523] (2) Sorting Rule which Uses Data Communication Link
[0524] Virtual port ID: VP12
[0525] Node device ID: DPID1
[0526] Interface ID: IF11
[0527] Own address: AD12
[0528] Counterpart address: AD22
[0529] Processing rule ID: A2
[0530] (Sorting Rule in Controller in Network 2)
[0531] FIG. 11B shows an example of the sorting rules in the
controller 10-2 in the network 2.
[0532] (1) Sorting Rule which Uses Control Channel
[0533] Virtual port ID: VP21
[0534] Node device ID: DPID4
[0535] Interface ID: IF41
[0536] Own address: AD21
[0537] Counterpart address: AD11
[0538] Processing rule ID: B1
[0539] (2) Sorting Rule which Uses Data Communication Link
[0540] Virtual port ID: VP22
[0541] Node device ID: DPID6
[0542] Interface ID: IF62
[0543] Own address: AD22
[0544] Counterpart address: AD12
[0545] Processing rule ID: B2
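The sorting rules of FIGS. 11A and 11B can be pictured as simple records keyed in two ways: by address pair when a packet leaves a virtual port, and by processing rule ID when a packet-in arrives. The following Python sketch models only controller 10-1's table; the field and function names are invented for illustration:

```python
# Hypothetical data-structure sketch of controller 10-1's sorting
# rules (FIG. 11A); field names are illustrative.
SORTING_RULES_10_1 = [
    {"vport": "VP11", "dpid": "DPID3", "ifid": "IF32",
     "own": "AD11", "peer": "AD21", "rule_id": "A1"},  # via control channel
    {"vport": "VP12", "dpid": "DPID1", "ifid": "IF11",
     "own": "AD12", "peer": "AD22", "rule_id": "A2"},  # via data link
]

def lookup_by_addresses(rules, dst, src):
    """Outbound: pick the rule whose own/peer addresses match src/dst."""
    for r in rules:
        if r["own"] == src and r["peer"] == dst:
            return r
    return None

def lookup_by_rule_id(rules, rule_id):
    """Inbound: map a packet-in's processing rule ID to a virtual port."""
    for r in rules:
        if r["rule_id"] == rule_id:
            return r
    return None

# A packet from the client section (src "AD11", dst "AD21") selects
# the control-channel rule, hence node device DPID3, interface IF32:
rule = lookup_by_addresses(SORTING_RULES_10_1, dst="AD21", src="AD11")
print(rule["dpid"], rule["ifid"])  # DPID3 IF32
```

Controller 10-2's table (FIG. 11B) would take the same shape with its own addresses and rule IDs.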
[0546] (Registration of Processing Rule (Flow Entry) to Node Device
20-3)
[0547] The node device control section 11-1 in the controller 10-1
searches the sorting rules stored in the sorting rule storage
section 13-1 and outputs a flow modification message (FlowMod-Add)
to set the node device 20-3 with a processing rule (or a flow
entry). Here, the destination address "AD11" and the source address
"AD21" are specified as the identifying conditions (or the
identifying rule) of the processing rule (or the flow entry). Also,
the contents of processing (or the action) of the processing rule
(flow entry) are specified to perform an operation of outputting a
packet-in message which incorporates the packet and the processing rule
ID "A1" to the controller 10-1 (or the output port connected to the
controller 10-1). That is, the node device control section 11-1
outputs to the node device 20-3 a flow modification message
(FlowMod-Add) to instruct the node device 20-3 to carry out a
process of outputting to the controller 10-1 the packet-in message
which incorporates the packet and the processing rule ID "A1", when
the interface unit 22-32 receives the packet in which the
destination address is "AD11" and the source address is "AD21".
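The registration of paragraph [0547] can be sketched as follows. The message layout below is a simplified assumption for illustration; it does not reproduce the actual OpenFlow wire format, and the builder function name is hypothetical:

```python
# Simplified, illustrative FlowMod-Add builder (not the real OpenFlow
# wire format): match on the address pair, and the action tags the
# resulting packet-in with the processing rule ID.
def build_flow_mod_add(in_if, dst, src, rule_id):
    return {
        "type": "FlowMod-Add",
        "match": {"in_if": in_if, "dst": dst, "src": src},
        # "CONTROLLER" stands for the output port connected to the
        # controller; rule_id is carried back in the packet-in message.
        "actions": [{"output": "CONTROLLER", "rule_id": rule_id}],
    }

# The message sent to node device 20-3 in paragraph [0547]:
msg = build_flow_mod_add("IF32", dst="AD11", src="AD21", rule_id="A1")
```

The mirror-image registration to node device 20-4 in paragraph [0549] would swap the addresses and use rule ID "B1".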
[0548] (Registration of Processing Rule (Flow Entry) to Node Device
20-4)
[0549] The node device control section 11-2 in the controller 10-2
searches the sorting rules stored in the sorting rule storage
section 13-2 and outputs a flow modification message (FlowMod-Add)
to set the node device 20-4 with a processing rule (or a flow
entry). Here, the destination address "AD21" and the source address
"AD11" are specified as the identifying conditions (or an
identifying rule) of the processing rule (or the flow entry). Also,
the contents of processing (or the action) of the processing rule
(or the flow entry) are specified to perform an operation of
outputting a packet-in message which incorporates the packet and the
processing rule ID "B1" to the controller 10-2 (or the output port
connected to the controller 10-2). That is, the node device control
section 11-2 outputs to the node device 20-4 a flow modification
message (FlowMod-Add) to instruct the node device 20-4 to carry out
a process of outputting to the controller 10-2 the packet-in
message which incorporates the packet and the processing rule ID
"B1", when the interface unit 22-41 receives the packet in which
the destination address is "AD21" and the source address is
"AD11".
[0550] (Packet Output Instruction to Node Device 20-3)
[0551] When receiving a packet from the virtual port 14-11, the
sorting section 12-1 in the controller 10-1, searches the sorting
rules stored in the sorting rule storage section 13-1, and if the
destination address of the packet is "AD21" and the source address
of the packet is "AD11", outputs a message of requesting the node
device control section 11-1 to output the packet from the interface
unit 22-32 in the node device 20-3.
[0552] When receiving the above-described message from the sorting
section 12-1, the node device control section 11-1 transmits a
packet-out message which incorporates the packet and an instruction
of outputting the packet to the interface unit 22-32, to the node
device 20-3 via the control channel.
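The packet-out path of paragraphs [0551] and [0552] amounts to wrapping the packet together with an output instruction; the message layout and handler below are illustrative assumptions, not the actual OpenFlow format:

```python
# Illustrative packet-out builder and a minimal node-device handler.
def build_packet_out(packet, out_if):
    return {"type": "PacketOut", "packet": packet,
            "actions": [{"output": out_if}]}

def node_device_handle(msg, emit):
    """emit(interface, packet) models the interface unit's output."""
    if msg["type"] == "PacketOut":
        for act in msg["actions"]:
            emit(act["output"], msg["packet"])

sent = []
node_device_handle(build_packet_out(b"pkt", "IF32"),
                   lambda ifid, pkt: sent.append((ifid, pkt)))
print(sent)  # [('IF32', b'pkt')]
```

The same shape serves the packet-out instruction to node device 20-4 in paragraph [0554], with the output interface changed accordingly.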
[0553] (Packet Output Instruction to Node Device 20-4)
[0554] When receiving a packet from the virtual port 14-21, the
sorting section 12-2 in the controller 10-2, searches the sorting
rules stored in the sorting rule storage section 13-2, and if the
destination address of the packet is "AD11" and the source address
of the packet is "AD21", outputs a message of requesting the node
device control section 11-2 to output the packet from the interface
unit 22-41 in the node device 20-4.
[0555] (Operation for Transferring Packet to Server Section from
Client Section)
[0556] An exemplary operation for transferring a packet to the
server section 17 from the client section 16 is described
below.
[0557] The client section 16 in the controller 10-1 outputs a
packet in which the destination address is defined as "AD21" and
the source address is defined as "AD11", to the virtual port
14-11.
[0558] When receiving the packet from the client section 16, the
virtual port 14-11 outputs the packet to the sorting section
12-1.
[0559] When receiving the packet from the virtual port 14-11, the
sorting section 12-1 searches the sorting rules stored in the
sorting rule storage section 13-1 and outputs a message of
requesting the node device control section 11-1 to output the
packet from the interface unit 22-32 in the node device 20-3, since
the destination address of the packet is "AD21" and the source
address of the packet is "AD11".
[0560] When receiving the above-described message from the sorting
section 12-1, the node device control section 11-1 transmits a
packet-out message which incorporates the packet and an instruction
of outputting the packet to the interface unit 22-32, to the
communication unit 21-3 in the node device 20-3.
[0561] When receiving the packet-out message from the node device
control section 11-1 in the controller 10-1, the communication unit
21-3 in the node device 20-3 outputs the packet incorporated in the
packet-out message to the interface unit 22-32.
[0562] When receiving the packet from the communication unit 21-3,
the interface unit 22-32 transfers the packet to the interface unit
22-41 in the node device 20-4, which is the connection destination,
via the data communication link.
[0563] When receiving the packet from the interface unit 22-32 in
the node device 20-3, the interface unit 22-41 in the node device
20-4 outputs the packet to the communication unit 21-4.
[0564] When receiving the packet from the interface unit 22-41, the
communication unit 21-4 processes the inputted packet in accordance
with the processing rules (the flow entries).
[0565] In this embodiment, the communication unit 21-4 is set with
a processing rule (or a flow entry) by the node device control
section 11-2 in the controller 10-2, wherein the processing rule
instructs to, when receiving a packet in which the destination
address is "AD21" and the source address is "AD11" in the interface
unit 22-41, output a packet-in message which incorporates the
packet and the processing rule ID "B1" to the controller 10-2.
[0566] Accordingly, the communication unit 21-4 outputs the
packet-in message incorporating the packet, which complies with the
identifying conditions (or the identifying rule) of the processing
rule (or the flow entry), and the processing rule ID "B1", to the
node device control section 11-2 in the controller 10-2, via the
control channel.
[0567] When receiving the above-described packet-in message from
the communication unit 21-4 in the node device 20-4, the node
device control section 11-2 in the controller 10-2 outputs a
message which incorporates the above-described packet and the
processing rule ID "B1" to the sorting section 12-2.
[0568] When receiving the above message from the node device
control section 11-2, the sorting section 12-2 searches the sorting
rules stored in the sorting rule storage section 13-2, and
determines that the packet is to be outputted to the virtual port
14-21 on the basis of the processing rule ID "B1" included in the
above message, and outputs the packet incorporated in the above
message to the virtual port 14-21.
[0569] When receiving the packet from the sorting section 12-2, the
virtual port 14-21 outputs the packet to the server section 17,
which is the connection destination thereof.
[0570] The server section 17 processes the packet when receiving
the packet from the virtual port 14-21.
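The receive side of this sequence (paragraphs [0566] to [0569]) is essentially a lookup from the processing rule ID carried in the packet-in message to a virtual port. A minimal sketch, with the "B1"-to-VP21 mapping taken from FIG. 11B and the function names invented for illustration:

```python
# Controller 10-2's mapping from processing rule ID to virtual port
# ("B1" -> VP21, per FIG. 11B; the dict form is an assumption).
RULE_TO_VPORT = {"B1": "VP21"}

def handle_packet_in(message, deliver):
    """deliver(vport, packet) models output to the virtual port,
    which in turn forwards to the server section connected there."""
    vport = RULE_TO_VPORT[message["rule_id"]]
    deliver(vport, message["packet"])

delivered = []
handle_packet_in({"rule_id": "B1", "packet": b"payload"},
                 lambda vp, pkt: delivered.append((vp, pkt)))
print(delivered)  # [('VP21', b'payload')]
```

Because the rule ID, not the packet headers, selects the virtual port, the sorting section need not re-parse the packet on the inbound path.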
[0571] (Operation for Transferring Packet to Client Section from
Server Section)
[0572] Next, an exemplary operation of transferring a packet from
the server section 17 to the client section 16 is described
below.
[0573] The server section 17 in the controller 10-2 outputs a
packet in which the destination address is defined as "AD11" and
the source address is defined as "AD21" to the virtual port
14-21.
[0574] When receiving the packet from the server section 17, the
virtual port 14-21 outputs the packet to the sorting section
12-2.
[0575] When receiving the packet from the virtual port 14-21, the
sorting section 12-2 searches the sorting rules stored in the
sorting rule storage section 13-2. Since the destination address of
the packet is "AD11" and the source address of the packet is
"AD21", the sorting section 12-2 outputs a message of requesting
the node device control section 11-2 to output the packet from the
interface unit 22-41 in the node device 20-4.
[0576] When receiving the above message from the sorting section
12-2, the node device control section 11-2 transmits a packet-out
message which incorporates the packet and an instruction of
outputting the packet to the interface unit 22-41, to the
communication unit 21-4 in the node device 20-4 via the control
channel.
[0577] When receiving the packet-out message from the node device
control section 11-2 in the controller 10-2, the communication unit
21-4 in the node device 20-4 outputs the packet incorporated in the
packet-out message to the interface unit 22-41.
[0578] When receiving the packet from the communication unit 21-4,
the interface unit 22-41 transfers the packet to the interface unit
22-32 in the node device 20-3, which is the connection destination,
through the data communication link.
[0579] When receiving the packet from the interface unit 22-41 in
the node device 20-4, the interface unit 22-32 in the node device
20-3 outputs the packet to the communication unit 21-3.
[0580] When receiving the packet from the interface unit 22-32, the
communication unit 21-3 processes the received packet on the basis
of the processing rules (or the flow entries).
[0581] Here, the communication unit 21-3 is set with a processing
rule (or a flow entry) from the node device control section 11-1 in
the controller 10-1, wherein the processing rule instructs to, when
receiving a packet in which the destination address is defined as
"AD11" and the source address is defined as "AD21" in the interface
unit 22-32, output a packet-in message which incorporates the
packet and the processing rule ID "A1" to the controller 10-1.
[0582] As a result, the communication unit 21-3 outputs the
packet-in message which incorporates the packet complying with the
identifying conditions (or the identifying rule) of the processing
rule (or the flow entry) and the processing rule ID "A1", to the
node device control section 11-1 in the controller 10-1 via the
control channel.
[0583] When receiving the above packet-in message from the
communication unit 21-3 in the node device 20-3, the node device
control section 11-1 in the controller 10-1 outputs a message which
incorporates the above packet and the processing rule ID "A1" to
the sorting section 12-1.
[0584] When receiving the above message from the node device
control section 11-1, the sorting section 12-1 searches the sorting
rules stored in the sorting rule storage section 13-1, and
determines that the packet is to be outputted to the virtual port
14-11 on the basis of the processing rule ID "A1" incorporated in
the above message, and then outputs the packet incorporated in the
above message to the virtual port 14-11.
[0585] When receiving the packet from the sorting section 12-1, the
virtual port 14-11 outputs the packet to the client section 16,
which is the connection destination.
[0586] The client section 16 processes the packet when receiving
the packet from the virtual port 14-11.
[0587] (Communications Via Data Communication Link)
[0588] In the fourth embodiment, communications between the
controllers 10 can be carried out by packet transfer via a data
communication link, similarly to the first embodiment.
[0589] (1) Setting of Processing Rules (Flow Entries) to Each Node
Device 20
[0590] Initially, the node device control section 11 in each of the
controllers 10-1 and 10-2 calculates a route to connect the end
points in each network and sets each node device 20 on the route
with processing rules (or flow entries) for the packet
transfer.
[0591] For example, the node device control section 11-1 in the
controller 10-1 calculates a route to connect the interface unit
22-11 in the node device 20-1 with the interface unit 22-32 in the
node device 20-3 in the network 1 and sets each node device 20 on
the route with processing rules (flow entries) for the packet
transfer.
[0592] Similarly, the node device control section 11-2 in the
controller 10-2 calculates a route to connect the interface unit
22-41 in the node device 20-4 to the interface unit 22-62 in the
node device 20-6 in the network 2 and sets each node device 20 on
the route with processing rules (flow entries) for the packet
transfer.
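Paragraphs [0590] to [0592] describe a route computation between two interfaces inside each network. A minimal sketch, assuming the chain topology of this embodiment and breadth-first search as the (unspecified) route algorithm:

```python
# Minimal route calculation sketch: BFS over the intra-network
# topology of network 1 (node devices 20-1, 20-2, 20-3; the
# adjacency dict is assumed from the connection example above).
from collections import deque

TOPOLOGY_NET1 = {
    "20-1": ["20-2"],
    "20-2": ["20-1", "20-3"],
    "20-3": ["20-2"],
}

def calc_route(topo, start, goal):
    """Return the node-device sequence from start to goal, or None."""
    prev, queue, seen = {}, deque([start]), {start}
    while queue:
        n = queue.popleft()
        if n == goal:
            path = [n]
            while n in prev:
                n = prev[n]
                path.append(n)
            return path[::-1]
        for m in topo[n]:
            if m not in seen:
                seen.add(m)
                prev[m] = n
                queue.append(m)
    return None

print(calc_route(TOPOLOGY_NET1, "20-1", "20-3"))  # ['20-1', '20-2', '20-3']
```

Processing rules (flow entries) for the packet transfer would then be set on every node device along the returned sequence, as described above.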
[0593] (2) Transmission and Reception of Packet through Link for
Data Communication
[0594] The sorting section 12 in each of the controller 10-1 and
the controller 10-2 then outputs a packet which is to be
transferred through the data communication link to the virtual port
14 connected to the interface management section 15. Consequently,
the packet is outputted from the interface management section 15.
At this time, the sorting section 12 may refer to the sorting rule
stored in the sorting rule storage section 13 and determine the
virtual port 14 of the output destination.
[0595] For example, the sorting section 12-1 in the controller 10-1
outputs a packet in which the destination address is "AD22" and the
source address is "AD12" to the virtual port 14-12. As a result, the
packet is outputted from the interface management section 15-1. In
this case, the client section 16 in the controller 10-1 may output
the packet in which the destination address is "AD22" and the
source address is "AD12" via the virtual port 14-11 to the sorting
section 12-1.
[0596] Similarly, the sorting section 12-2 in the controller 10-2
outputs a packet in which the destination address is "AD12" and the
source address is "AD22" to the virtual port 14-22. As a result,
the packet is outputted from the interface management section 15-2.
In this case, the server section 17 in the controller 10-2 may
output the packet in which the destination address is "AD12" and
the source address is "AD22" via the virtual port 14-21 to the
sorting section 12-2.
[0597] The communication unit 21 in each node device 20 transfers
packets in accordance with the processing rules (or the flow
entries).
[0598] The interface management section 15 in each of the
controller 10-1 and the controller 10-2 outputs packets which are
received from a node device 20 via a data communication link to the
sorting section 12 via a virtual port 14.
[0599] For example, the interface management section 15-1 in the
controller 10-1 outputs a packet received from the node device 20-1
via the data communication link to the sorting section 12-1 through
the virtual port 14-12. In this case, the sorting section 12-1 in
the controller 10-1 may output the packet received from the virtual
port 14-12 to the client section 16, through the virtual port
14-11.
[0600] Similarly, the interface management section 15-2 in the
controller 10-2 outputs a packet received from the node device 20-6
via the data communication link to the sorting section 12-2 through
the virtual port 14-22. In this case, the sorting section 12-2 in
the controller 10-2 may output the packet received from the virtual
port 14-22 to the server section 17 through the virtual port
14-21.
[0601] (Supplement)
[0602] The sorting section 12 in each controller 10 may perform
packet transfer between the virtual ports 14 in the controller 10
without referring to the sorting rules stored in the sorting rule
storage section 13.
[0603] It should be noted that the sorting rule storage section 13
may store the sorting rules of packets exchanged among the virtual
ports 14 in the controller 10.
[0604] When one of the controllers is used as a terminal device
connected to a node device, this embodiment allows communications
that use a conventional communication method, such as TCP/IP,
between the terminal device and the controller.
Advantage of Communications Systems Disclosed in Above-Described
Embodiments
[0605] Conventionally, there is a difficulty in achieving
communications among controllers by using a conventional
communication method such as TCP/IP in a CD-separated type network,
such as OpenFlow networks. The use of the systems disclosed in
these embodiments of the present invention enables communications
based on a conventional communication method such as TCP/IP between
or among a plurality of controllers.
[0606] This allows achieving a distributed control of the entire
system by a plurality of controllers by reusing distributed-control
applications based on a conventional communication architecture
such as TCP/IP, making it easy to establish a large-scale
system.
[0607] It should be noted that the above-mentioned embodiments can
be carried out in combination with each other.
Examples of Hardware Configuration
[0608] Examples of hardware devices which may be used in the
communications system according to the present invention are described
below.
[0609] Examples of devices which may be used as the controllers
include a computer such as a PC (personal computer), an appliance,
a thin client server, a workstation, a mainframe, and a supercomputer.
Note that the controllers may be a relaying device or a peripheral
device, not limited to a terminal device or a server. Also, an
expansion board installed in a computer or the like may be used as
the controller, or a virtual machine (VM) established on a physical
machine may be used as the controller.
[0610] Examples of devices which may be used as the node devices
include a network switch, a router, a proxy, a gateway, a firewall,
a load balancer, a packet shaper, a SCADA (supervisory control and
data acquisition) security monitoring controller, a gatekeeper, a
base station, an access point (AP), a communication satellite (CS),
and a computer having a plurality of communication ports. Also, a
virtual switch operating on a virtual machine (VM) established on a
physical machine may be used as the node device. The controllers
and the node devices may be installed on movable bodies such as
vehicles, ships and airplanes.
[0611] In one example, as shown in FIG. 12, each controller 10 may
include a storage device (or memory) 31, a processor 32, and
an interface 33. The storage device 31 stores a software program
31a which includes codes describing the above-described operations
of the controller 10. The storage device 31 is also used to store
various data used and generated in the operations of the controller
10. The processor 32 executes the software program 31a to perform
the above-described operations of the controller 10. The interface
33 is used to communicate with the node devices 20. A
non-transitory recording medium 50 may be used to install the
software program 31a onto the storage device 31.
[0612] Similarly, as shown in FIG. 13, each node device 20 may
include a storage device (or memory) 41, a processor 42, and
interfaces 43 and 44. The storage device 41 stores a software
program 41a which includes codes describing the above-described
operations of the node device 20. The storage device 41 is also
used to store various data used and generated in the operations of
the node device 20. The processor 42 executes the software program
41a to perform the above-described operations of the node device
20. The interface 43 is used to communicate with another node
device 20, and interface 44 is used to communicate with a
controller 10. A non-transitory recording medium 60 may be used to
install the software program 41a onto the storage device 41.
[0613] Examples of the processors 32 and 42 include a CPU (central
processing unit), a network processor (NP), a microprocessor, a
microcontroller, and a large scale integrated circuit (LSI) having
a dedicated function and the like.
[0614] Examples of the storage devices (or memories) 31 and 41
include a semiconductor storage device such as a RAM (Random Access
Memory), a ROM (Read Only Memory), an EEPROM (Electrically Erasable
and Programmable Read Only Memory), a flash memory, an auxiliary
storage device such as an HDD (Hard Disk Drive), an SSD (Solid
State Drive), a removable disk such as a DVD (Digital Versatile
Disk), a storage medium such as an SD memory card (Secure Digital
memory card), and the like. Also, a buffer, a register and the like
may be used as the storage devices (or memories) 31 and 41. In one
embodiment, a storage device that uses a DAS (direct attached
storage), an FC-SAN (fiber channel-storage area network), a NAS
(network attached storage), an IP-SAN (IP-storage area network) and
the like may be used as the storage devices (or memories) 31 and
41.
[0615] The processor 32 and the storage device 31 may be
monolithically integrated, and the processor 42 and the storage
device 41 may be monolithically integrated. In recent years,
one-chip microcomputers have been made popular. In one embodiment,
a one-chip microcomputer installed in an electronic appliance or
the like may monolithically integrate the above-described processor
and storage device.
[0616] Examples of the above-described interfaces include a circuit
board (a mother board or an I/O board) and a semiconductor integrated
circuit which are adapted to network communication, a network
adaptor such as an NIC (network interface card), a similar
expansion card, a communication apparatus such as an antenna, and a
communication port such as a connection port (connector).
[0617] Also, examples of the network include the Internet, a LAN
(local area network), a wireless LAN, a WAN (wide area network), a
backbone, a cable television (CATV) line, a fixed telephone
network, a mobile telephone network, WiMAX (IEEE 802.16a), a 3G
(3rd generation) communication system, a dedicated line (lease
line), an IrDA (infrared data association), Bluetooth (Registered
Trademark), a serial communication line, a data bus and the
like.
[0618] Configuration elements included in each of the controllers
and the node devices may be modules and components, or dedicated
devices, or starting (calling) programs for them.
[0619] It should be noted that actual implementations are not
limited to these examples.
Summary
[0620] As discussed above, the communications system in exemplary
embodiments of the present invention includes controllers and node
devices. The controllers control packet processing in the
respective node devices.
[0621] The node devices output packets from their own physical or
logical interfaces under control of the controllers.
[0622] Each controller obtains identifying conditions
(identifying rules) of packets used to communicate with a network
that is not under control of the controller (that is, a network
provided outside its own network).
[0623] Also, each controller calculates the transfer route which
connects an interface of one node device and another interface of a
boundary node device located on the boundary with a different
network, which is used for establishing a connection to a device
provided outside its own network.
[0624] In one embodiment, the controllers may each calculate a
transfer route in which a start point is defined as an interface of
a node device connected to the controller via a data communication
link and an end point is defined as an interface of a boundary node
device which is used for establishing a connection to a device
provided outside its own network. Alternatively, the controllers
may each calculate a transfer route in which a start point is
defined as an interface of a boundary node device which is used for
establishing a connection to a device provided outside its own
network and an end point is defined as an interface of a node
device connected to the controller via a data communication
link.
[0625] Also, the controllers each set the node devices with
processing rules (or flow entries) so that packets complying with
the identifying conditions (or the identifying rules) are
transferred on the calculated transfer route.
Supplementary Note
[0626] Some or all of the above-mentioned embodiments may be
represented as described in the following supplementary notes. It
should be noted that actual implementations are not limited to the
following supplementary notes.
[0627] (Supplementary Note 1)
[0628] A communications system in which controllers control packet
processing in each of node devices, and the node devices each
output packets from any interfaces thereof under control of a controller
connected thereto,
[0629] wherein each of the controllers includes:
[0630] a node communication section which sets a control channel to
control each of the node devices and transmits and receives control
messages;
[0631] a network interface connected to one of node devices via a
data communication link;
[0632] an adjacency discovery section which discovers a boundary
node device from the node devices, the boundary node being located
on the boundary with a different network that is controlled by a
different controller;
[0633] an identifying condition calculating section calculating
identifying conditions (or identifying rules) of packets used to
communicate with a controller in the network adjacent thereto;
[0634] a route calculating section that calculates a transfer route
having a start point determined as a node device connected to an
interface of the controller, through which a packet is outputted to
an interface of the boundary node device, the interface being
connected to a different network outside its own network, and a
transfer route having a start point determined as the boundary node
device, through which a packet is outputted to an interface of the
node device connected to the interface of the controller; and
[0635] a processing rule calculating section that sets the node
devices connected to each controller with processing rules
(flow entries) so as to transfer packets complying with the
identifying conditions (the identifying rules) on the transfer
route.
[0636] (Supplementary Note 2)
[0637] The communications system set forth in supplementary note 1,
wherein the adjacency discovery section embeds unique
identification information of each controller into retrieval
packets used to retrieve a connection relation among the node
devices inside each network.
[0638] (Supplementary Note 3)
[0639] The communications system set forth in the supplementary
note 1, wherein the controller instructs the boundary node device
to output a packet which incorporates identifying conditions
(identifying rules) from the interface connected to the different
network.
[0640] (Supplementary Note 4)
[0641] The communications system described in the supplementary
note 1, wherein the identifying condition calculating section
incorporates identifying conditions used in the communication
between the controllers into a retrieval packet used to retrieve a
connection relation among the node devices inside each network.
[0642] (Supplementary Note 5)
[0643] The communications system set forth in any one of
supplementary notes 1 to 4, wherein the route calculation section
refers to identification information incorporated in the retrieval
packet transmitted to the controller through an interface by a node
device, and, if it is equal to identification information
indicative of its own controller, determines the interface of the
node device connected to the controller as an end point in the
route calculation.
[0644] (Supplementary Note 6)
[0645] The communications system set forth in any one of
supplementary notes 1 to 5, wherein the network interface is
physically connected through a network link connection cable.
[0646] (Supplementary Note 7)
[0647] The communications system set forth in any one of
supplementary notes 1 to 5, wherein the controller contains one or
more virtual ports for transmitting and receiving packets, a
sorting rule storage section storing one or more sorting rules of
packets and a sorting section for specifying a sorting destination
of packets,
[0648] wherein the sorting rule storage section retrieves and
returns a selected one of the sorting rules in response to a reference
request, and
[0649] wherein the sorting section specifies a transfer destination
of packets transmitted and received between the interface of the
boundary node device and the virtual ports, in accordance with the
sorting rule selected by referring to the sorting rule storage
section.
[0650] (Supplementary Note 8)
[0651] The communications system set forth in any one of
supplementary notes 1 to 5,
[0652] wherein the communications system includes a plurality of
controllers, an interface section in a node device controlled by
one of the controllers is connected via a communication line to an
interface section in a node device controlled by a different one of
the controllers,
[0653] wherein each of the controllers contains a wide area
control section that communicates with a different one of the
controllers,
[0654] wherein the wide area control sections are each connected to
one or more of the virtual ports, and
[0655] wherein the wide area control sections communicate with each
other through the virtual ports.
[0656] (Supplementary Note 9)
[0657] A communicating method in which controllers control packet
processing in each of node devices, and the node devices each
output packets from any interfaces thereof under control of a
controller connected thereto,
[0658] wherein each of the controllers is connected via a network
interface which achieves a data transfer link connection to one or
more of the node devices, and
[0659] wherein the communication method includes:
[0660] discovering, from the node devices, a boundary node device
located on a boundary with a different network that is controlled
by a different controller;
[0661] calculating identifying conditions (or identifying rules) of
packets used to communicate with the different controller in the
different network;
[0662] calculating a transfer route having a start point determined
as a node device connected to an interface of the controller,
through which a packet is outputted to an interface of the boundary
node device, the interface being connected to a different network
outside its own network, and a transfer route having a start point
determined as the boundary node device, through which a packet is
outputted to an interface of the node device connected to the
interface of the controller; and
[0663] setting the node devices connected to each controller
with processing rules (flow entries) so as to transfer packets
complying with the identifying conditions (the identifying rules)
on the transfer route, and
[0664] processing a packet complying with the identifying
conditions (or the identifying rules) of a processing rule in
accordance with the processing rule (flow entry).
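The steps of Supplementary Note 9 might be sketched as follows. This is a highly simplified illustration, not the application's method: the topology representation, node names, match fields, and the use of breadth-first search for the route calculation are all assumptions.

```python
# Hypothetical sketch of the method of Supplementary Note 9.
# Topology layout, node names, and match fields are illustrative
# assumptions; BFS is only one possible route calculation.
from collections import deque


def discover_boundary_nodes(topology, own_nodes):
    """A node is a boundary node if any of its links leads to a node
    outside its own network (i.e., controlled by a different controller)."""
    boundary = set()
    for node, neighbors in topology.items():
        if node in own_nodes and any(n not in own_nodes for n in neighbors):
            boundary.add(node)
    return boundary


def calculate_route(topology, start, goal):
    """Shortest path by breadth-first search from start to goal."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in topology.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None


# Example: the controller attaches at "s1"; "s3" borders the other
# network, which contains "x1".
topology = {"s1": ["s2"], "s2": ["s1", "s3"], "s3": ["s2", "x1"], "x1": ["s3"]}
own_nodes = {"s1", "s2", "s3"}

boundary = discover_boundary_nodes(topology, own_nodes)
identifying_conditions = {"eth_type": 0x0800, "tcp_dst": 6653}  # assumed fields
route = calculate_route(topology, "s1", next(iter(boundary)))
# One processing rule (flow entry) per node on the transfer route.
flow_entries = [(node, identifying_conditions) for node in route]
print(boundary)  # -> {'s3'}
print(route)     # -> ['s1', 's2', 's3']
```

The reverse route of the method (boundary node back to the controller's node) would be computed the same way with start and goal swapped.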
[0665] (Supplementary Note 10)
[0666] The communicating method described in supplementary
note 9, wherein each controller embeds its own unique
identification information into retrieval packets used to
retrieve a connection relation among the node devices inside each
network,
[0667] wherein the controller refers to identification information
incorporated in the retrieval packet, and compares the
identification information incorporated in the packet with its own
identification information and consequently determines whether the
retrieval packet comes from a different controller.
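The retrieval-packet handling of Supplementary Note 10 resembles LLDP-style topology discovery, and might be sketched as follows. The packet layout and the identifier value are assumptions for illustration, not taken from the application.

```python
# Hypothetical sketch of the retrieval-packet comparison of
# Supplementary Note 10. The dict-based packet layout and the
# controller identifier are illustrative assumptions.

OWN_CONTROLLER_ID = "controller-A"  # assumed unique identifier


def build_retrieval_packet(out_node, out_port):
    """Embed this controller's identification information into a
    retrieval packet sent out of (out_node, out_port)."""
    return {"controller_id": OWN_CONTROLLER_ID,
            "src_node": out_node, "src_port": out_port}


def classify_retrieval_packet(packet):
    """Compare the embedded identification information with our own
    to decide whether the packet comes from a different controller."""
    if packet["controller_id"] == OWN_CONTROLLER_ID:
        return "internal link"
    return "boundary link (different controller)"


pkt = build_retrieval_packet("s1", 2)
print(classify_retrieval_packet(pkt))
# -> internal link
print(classify_retrieval_packet({"controller_id": "controller-B"}))
# -> boundary link (different controller)
```

A retrieval packet that comes back carrying a foreign identifier marks the receiving interface as facing a network under a different controller, which is exactly how the boundary node device of the earlier notes would be discovered.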
[0668] (Supplementary Note 11)
[0669] The communicating method set forth in supplementary note 9,
wherein the controller instructs the boundary node device to output
a packet incorporating identifying conditions (or the identifying
rules) from the interface connected to the different network.
[0670] (Supplementary Note 12)
[0671] The communicating method set forth in supplementary note 9,
wherein the controller incorporates packet judgment conditions used
in the communication between the controllers into a retrieval
packet used to retrieve the connection relation between the node
devices in its own network.
[0672] (Supplementary Note 13)
[0673] The communicating method set forth in any one of
supplementary notes 9 to 12, wherein each controller refers to
identification information incorporated in the retrieval packet
transmitted to the controller through the interface from the node
device, and if the identification information is equal to its own
identification information, determines the interface of the node
device connected to the controller as an end point in the route
calculation.
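The end-point determination of Supplementary Note 13 might be sketched as below. The function signature and packet layout are hypothetical; the application does not specify an implementation.

```python
# Hypothetical sketch of the end-point determination of
# Supplementary Note 13. Names and packet layout are illustrative
# assumptions.

OWN_ID = "controller-A"  # assumed identifier of this controller


def determine_endpoint(retrieval_packet, receiving_node, receiving_port):
    """If the retrieval packet carries this controller's own
    identification information, the (node, port) at which it arrived
    attaches to the controller's interface and serves as an end point
    of the route calculation; otherwise it is ignored."""
    if retrieval_packet.get("controller_id") == OWN_ID:
        return (receiving_node, receiving_port)
    return None  # packet from a different controller: not an end point


print(determine_endpoint({"controller_id": "controller-A"}, "s1", 1))
# -> ('s1', 1)
print(determine_endpoint({"controller_id": "controller-B"}, "s3", 4))
# -> None
```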
[0674] (Supplementary Note 14)
[0675] The communicating method described in one of the
supplementary notes 9 to 13, wherein the controller is physically
connected through a cable for a network link connection to the node
device.
[0676] (Supplementary Note 15)
[0677] A controller for controlling packet processing of node
devices, including:
[0678] a network interface for establishing a connection to one of
the node devices via a data transfer link;
[0679] an adjacency discovery section for discovering a boundary
node device from the node devices, the boundary node device being
located on the boundary with a different network that is controlled
by a different controller;
[0680] an identifying condition calculating section calculating
identifying conditions (or identifying rules) of packets used to
communicate with a controller in the network adjacent thereto;
[0681] a route calculating section that calculates a transfer route
having a start point determined as a node device connected to an
interface of the controller, through which a packet is outputted to
an interface of the boundary node device, the interface being
connected to a different network outside its own network, and a
transfer route having a start point determined as the boundary node
device, through which a packet is outputted to an interface of the
node device connected to the interface of the controller; and
[0682] a processing rule calculating section that sets the node
devices connected to each controller with processing rules
(flow entries) so as to transfer packets complying with the
identifying conditions (the identifying rules) on the transfer
route.
[0683] (Supplementary Note 16)
[0684] The controller described in the supplementary note 15,
wherein the adjacency discovery section embeds unique
identification information of each controller into retrieval
packets used to retrieve a connection relation among the node
devices inside each network.
[0685] (Supplementary Note 17)
[0686] The controller described in the supplementary note 15,
wherein the identifying condition calculating section of the
controller instructs the boundary node device to output a packet
which incorporates identifying conditions (identifying rules) from
the interface connected to the different network.
[0687] (Supplementary Note 18)
[0688] The controller described in the supplementary note 15,
wherein the identifying condition calculating section incorporates
identifying conditions used in the communication between the
controllers into a retrieval packet used to retrieve a connection
relation among the node devices inside each network.
[0689] (Supplementary Note 19)
[0690] The controller described in one of the supplementary notes
15 to 18, wherein the route calculation section refers to
identification information incorporated in the retrieval packet
transmitted to the controller through an interface by a node
device, and, if the identification information indicates its own
controller, determines the interface of the
node device connected to the controller as an end point in the
route calculation.
[0691] (Supplementary Note 20)
[0692] The controller described in one of the supplementary notes
15 to 19, wherein the network interface is physically connected
through a network link connection cable.
[0693] It should be noted that an information processing apparatus
may be used as the above-described controller. Also, a
communicating apparatus may be used as the above-described node
device.
REMARK
[0694] While the invention has been particularly shown and
described with reference to exemplary embodiments thereof, the
invention is not limited to these examples. It will be understood
by those skilled in the art that various changes in form and
details may be made therein without departing from the spirit and
scope of the present invention as defined by the claims.
[0695] This application is based upon and claims the benefit of
priority from Japanese patent application No. 2012-068286, filed on
Mar. 23, 2012, the disclosure of which is incorporated herein in
its entirety by reference.
* * * * *