U.S. patent application number 12/582271 was filed with the patent office on 2009-10-20 and published on 2011-04-21 as publication number 20110093849 for a system and method for reconfigurable network services in dynamic virtualization environments.
This patent application is currently assigned to DELL PRODUCTS, LP. The invention is credited to Gaurav Chawla, Jacob Cherian, Hendrich M. Hernandez, Saikrishna Kotha, and Robert L. Winter.
United States Patent Application: 20110093849
Kind Code: A1
Chawla; Gaurav; et al.
April 21, 2011
System and Method for Reconfigurable Network Services in Dynamic
Virtualization Environments
Abstract
A method includes configuring a host system to instantiate a
virtual machine using server configuration information from a
virtual machine monitor (VMM) and configuring a switch network to
provide the virtual machine with access to resources on the switch
network using network configuration information from the VMM. A VMM
includes a workload with a server configuration module that
configures a host system to include a virtual machine, and a
network configuration module that configures a switch network
coupled to the host system, such that the virtual machine obtains
access to resources on the switch network.
Inventors: Chawla; Gaurav; (Austin, TX); Hernandez; Hendrich M.; (Round Rock, TX); Cherian; Jacob; (Austin, TX); Winter; Robert L.; (Leander, TX); Kotha; Saikrishna; (Austin, TX)
Assignee: DELL PRODUCTS, LP (Round Rock, TX)
Family ID: 43880263
Appl. No.: 12/582271
Filed: October 20, 2009
Current U.S. Class: 718/1; 709/222; 709/226
Current CPC Class: G06F 2009/45595 20130101; H04L 41/0816 20130101; G06F 9/45558 20130101; G06F 2009/4557 20130101; G06F 8/60 20130101; G06F 8/61 20130101
Class at Publication: 718/1; 709/222; 709/226
International Class: G06F 9/455 20060101 G06F009/455
Claims
1. A method comprising: configuring a first host system to
instantiate a first virtual machine using server configuration
information obtained from a virtual machine monitor (VMM); and
configuring a switch network to provide the first virtual machine
with access to resources on the switch network using network
configuration information obtained from the VMM.
2. The method of claim 1, wherein, in configuring the switch
network, the method further comprises: before configuring the
switch network, sending the network configuration information from
the VMM to the first host system; and directing the first host
system to configure the switch network using the network
configuration information.
3. The method of claim 2, wherein, in sending the network
configuration information to the first host system, the method
further comprises: organizing at the VMM the network configuration
information as an Ethernet frame request; sending from the VMM the
Ethernet frame request to the first host system; and directing the
first host system to send the Ethernet frame request to the switch
network.
4. The method of claim 3, further comprising receiving at the VMM
an Ethernet frame reply from the switch
network in response to sending the Ethernet frame request.
5. The method of claim 1, further comprising: configuring a second
host system to instantiate a second virtual machine using the
server configuration information; and configuring the switch
network to provide the second virtual machine with access to the
resources on the switch network using the network configuration
information.
6. The method of claim 5, further comprising deleting the first
virtual machine from the first host system.
7. Machine-executable code for an information handling system
comprising a first resource, wherein the machine-executable code is
embedded within a tangible medium and includes instructions for
carrying out a method comprising: configuring a first host system
to instantiate a first virtual machine using server configuration
information obtained from a virtual machine monitor (VMM); and
configuring a switch network to provide the first virtual machine
with access to resources on the switch network using network
configuration information obtained from the VMM.
8. The machine-executable code of claim 7, wherein, in configuring
the switch network, the method further comprises: before
configuring the switch network, sending the network configuration
information from the VMM to the first host system; and directing
the first host system to configure the switch network using the
network configuration information.
9. The machine-executable code of claim 8, wherein, in sending the
network configuration information to the first host system, the
method further comprises: organizing at the VMM the network configuration
information as an Ethernet frame request; sending from the VMM the
Ethernet frame request to the first host system; and directing the
first host system to send the Ethernet frame request to the switch
network.
10. The machine-executable code of claim 8, the method further
comprising: configuring a second host system to instantiate a
second virtual machine using the server configuration information;
and configuring the switch network to provide the second virtual
machine with access to the resources on the switch network using
the network configuration information.
11. The machine-executable code of claim 10, the method further
comprising deleting the first virtual machine from the first host
system.
12. A virtual machine monitor (VMM) comprising a workload, the
workload including: a server configuration module operable to
configure a host system to include a virtual machine; and a network
configuration module operable to configure a switch network coupled
to the host system, such that the virtual machine obtains access to
resources on the switch network.
13. The VMM of claim 12, wherein the VMM is operable to instantiate
the workload on a first particular host system, wherein, in
instantiating the workload: the server configuration module is
operable to configure the first particular host system to include a
first particular virtual machine; and the network configuration
module is operable to configure the switch network to provide the
first particular virtual machine with access to the resources on
the switch network.
14. The VMM of claim 13, wherein, in configuring the switch
network, the network configuration module is further operable to:
send network configuration metadata to the first particular host
system; and direct the first particular host system to send the
network configuration metadata to the switch network.
15. The VMM of claim 14, wherein, in sending the network
configuration metadata to the first particular host system, the
network configuration module is further operable to: organize the
network configuration metadata as an Ethernet frame request; send
the Ethernet frame request to the first particular host system; and
direct the first particular host system to send the Ethernet frame
request to the switch network.
16. The VMM of claim 15, wherein the VMM is further operable to
receive an Ethernet frame reply from the switch network in response
to sending the Ethernet frame request.
17. The VMM of claim 13, wherein the VMM is further operable to
migrate the workload to a second particular host system, wherein,
in migrating the workload: the server configuration module is
operable to configure the second particular host system to
instantiate a second particular virtual machine; and the network
configuration module is operable to configure the switch network to
provide the second particular virtual machine with access to the
resources on the switch network.
18. The VMM of claim 17, wherein, in migrating the workload, the
VMM is further operable to delete the first particular virtual
machine from the first particular host system.
19. The VMM of claim 12, wherein the VMM is included in the host
system.
20. The VMM of claim 13, wherein the VMM is operable to modify the
workload on the first particular host system, wherein, in modifying
the workload, the VMM is operable to: modify network configuration
metadata associated with the network configuration module; and
configure the switch network using the modified network
configuration metadata.
Description
FIELD OF THE DISCLOSURE
[0001] This disclosure relates generally to information handling
systems, and relates more particularly to network switching in an
information handling system.
BACKGROUND
[0002] As the value and use of information continues to increase,
individuals and businesses seek additional ways to process and
store information. One option is an information handling system. An
information handling system generally processes, compiles, stores,
or communicates information or data for business, personal, or
other purposes. Because technology and information handling needs
and requirements can vary between different applications,
information handling systems can also vary regarding what
information is handled, how the information is handled, how much
information is processed, stored, or communicated, and how quickly
and efficiently the information can be processed, stored, or
communicated. The variations in information handling systems allow
information handling systems to be general or configured for a
specific user or specific use such as financial transaction
processing, airline reservations, enterprise data storage, or
global communications. In addition, information handling systems
can include a variety of hardware and software resources that can
be configured to process, store, and communicate information and
can include one or more computer systems, data storage systems, and
networking systems. An information handling system can include
virtual machines that run operating systems and applications on a
common host system.
BRIEF DESCRIPTION OF THE DRAWINGS
[0003] It will be appreciated that for simplicity and clarity of
illustration, elements illustrated in the Figures have not
necessarily been drawn to scale. For example, the dimensions of
some of the elements are exaggerated relative to other elements.
Embodiments incorporating teachings of the present disclosure are
illustrated and described with respect to the drawings presented
herein, in which:
[0004] FIG. 1 is a block diagram of a network system according to
an embodiment of the present disclosure;
[0005] FIG. 2 is a schematic representation of an Ethernet frame
for performing a network configuration transaction according to an
embodiment of the present disclosure;
[0006] FIG. 3 is a flowchart illustrating a method for
reconfiguring network services in dynamic virtualization
environments according to an embodiment of the present disclosure;
and
[0007] FIG. 4 is a functional block diagram illustrating an
exemplary embodiment of an information handling system.
[0008] The use of the same reference symbols in different drawings
indicates similar or identical items.
DETAILED DESCRIPTION OF DRAWINGS
[0009] The following description in combination with the Figures is
provided to assist in understanding the teachings disclosed herein.
The following discussion will focus on specific implementations and
embodiments of the teachings. This focus is provided to assist in
describing the teachings, and should not be interpreted as a
limitation on the scope or applicability of the teachings. However,
other teachings can be used in this application. The teachings can
also be used in other applications, and with several different
types of architectures, such as distributed computing
architectures, client/server architectures, or middleware server
architectures and associated resources.
[0010] For purposes of this disclosure, an information handling
system can include any instrumentality or aggregate of
instrumentalities operable to compute, classify, process, transmit,
receive, retrieve, originate, switch, store, display, manifest,
detect, record, reproduce, handle, or use any form of information,
intelligence, or data for business, scientific, control,
entertainment, or other purposes. For example, an information
handling system can be a personal computer, a PDA, a consumer
electronic device, a network server or storage device, a switch, a
router, a wireless router, or other network communication device, or
any other suitable device, and can vary in size, shape, performance,
functionality, and price. The information handling system can
include memory (volatile, such as random-access memory, nonvolatile,
such as read-only memory or flash memory, or any combination
thereof), one or more processing resources, such as a central
processing unit (CPU), a graphics processing unit (GPU), hardware
or software control logic, or any combination thereof. Additional
components of the information handling system can include one or
more storage devices, one or more communications ports for
communicating with external devices, as well as various input and
output (I/O) devices such as a keyboard, a mouse, a video/graphic
display, or any combination thereof. The information handling
system can also include one or more buses operable to transmit
communications between the various hardware components. Portions of
an information handling system may themselves be considered
information handling systems.
[0011] Portions of an information handling system, when referred to
as a "device," a "module," or the like, can be configured as
hardware, software (which can include firmware), or any combination
thereof. For example, a portion of an information handling system
device may be hardware such as, for example, an integrated circuit
(such as an Application Specific Integrated Circuit (ASIC), a Field
Programmable Gate Array (FPGA), a structured ASIC, or a device
embedded on a larger chip), a card (such as a Peripheral Component
Interface (PCI) card, a PCI-express card, a Personal Computer
Memory Card International Association (PCMCIA) card, or other such
expansion card), or a system (such as a motherboard, a
system-on-a-chip (SoC), or a stand-alone device). Similarly, the
device could be software, including firmware embedded at a device,
such as a Pentium class or PowerPC™ brand processor, or other
such device, or software capable of operating a relevant
environment of the information handling system. The device could
also be a combination of any of the foregoing examples of hardware
or software. Note that an information handling system can include
an integrated circuit or a board-level product having portions
thereof that can also be any combination of hardware and
software.
[0012] Devices or programs that are in communication with one
another need not be in continuous communication with each other
unless expressly specified otherwise. In addition, devices or
programs that are in communication with one another may communicate
directly or indirectly through one or more intermediaries.
[0013] FIG. 1 illustrates a network system 100 according to an
embodiment of the present disclosure, including a virtual machine
monitor (VMM) 110, a host processing system 120, one or more
additional host processing systems 130, a switch network 140, and a
storage device 150. Host processing system 120 is connected to
switch network 140 by port 161, host processing system 130 is
connected to switch network 140 by port 162, and switch network 140
is connected to storage device 150 by port 163. VMM 110 enables
hardware and software virtualization on host processing systems 120
and 130, such that one or more virtual machines (VM) 121 and 131
can be instantiated on host processing systems 120 and 130. VMM 110
permits each of host processing systems 120 and 130 to run multiple
operating systems (OSs), including different OSs, multiple
instances of the same OS, or a combination of multiple OSs and
multiple instances of a particular OS. VMM 110 creates and deletes
VMs 121 and 131, and migrates VMs 121 and 131 between host
processing systems 120 and 130. VMs 121 and 131 can be created,
deleted, or migrated in order to better utilize the processing
resources of host processing systems 120 and 130, to balance
workloads between host processing systems 120 and 130, or to
provide a more fault tolerant and reliable processing environment.
In the illustrated embodiment, VMM 110 is implemented as a function
in a separate information handling system that is coupled to host
processing systems 120 and 130. In another embodiment (not
illustrated), VMM 110 is implemented on host processing systems 120
and 130 as separate virtual machine monitors that function to
communicate with each other to manage the creation, deletion, and
migration of virtual machines between host processing systems 120
and 130.
[0014] Switch network 140 represents one or more switch elements
(not illustrated) that function to route data communications
between host processing systems 120 and 130 and storage device 150.
The switch elements are associated with one or more network
switching fabrics. For example, switch elements within switch
network 140 can be Fibre Channel switch elements, Internet Small
Computer System Interface (iSCSI) switch elements, switch elements
according to another network switching fabric, or a combination
thereof. Ports 161, 162, and 163 each represent one or more data
communication links between the elements connected thereto, and
that supply a bandwidth capacity for communicating data between the
components connected thereto. A portion of the bandwidth capacity
of ports 161 and 163 can be allocated to VM 121, and a portion of
the bandwidth capacity of ports 162 and 163 can be allocated to VM
131. The allocation of bandwidth capacity is done by configuring
the switch elements in switch network 140 to provide a contiguous
data path between VM 121 and storage device 150, and by configuring
the switch elements in switch network 140 to provide a contiguous
data path between VM 131 and storage device 150. In this way, VMs
121 and 131 obtain data communication access to storage device 150.
In addition to configuring the switch elements in switch network
140 to allocate a portion of their bandwidth to VMs 121 and 131,
the switch elements can be configured with associated virtual local
area network (VLAN) identifiers, priority or quality-of-service
policies, access control list (ACL) rules, mirroring policies, or
other configuration information used by the network elements to
establish route mappings between VMs 121 and 131 and storage device
150.
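The per-VM switch configuration described above can be sketched as a profile record. The field names below are illustrative: the disclosure lists the parameter categories (bandwidth, VLAN identifiers, quality-of-service policies, ACL rules, mirroring) but does not define a concrete schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SwitchProfile:
    """Hypothetical per-VM configuration applied to switch elements
    along a contiguous data path between a VM and a storage device."""
    vm_id: str                                     # key associating traffic with a VM
    bandwidth_mbps: int                            # portion of port bandwidth allocated
    vlan_ids: List[int] = field(default_factory=list)
    qos_priority: int = 0                          # priority / quality-of-service policy
    acl_rules: List[str] = field(default_factory=list)
    mirroring: bool = False

# A profile establishing a route mapping between VM 121 and storage device 150:
profile = SwitchProfile(vm_id="vm-121", bandwidth_mbps=500, vlan_ids=[10])
```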
[0015] Storage device 150 represents one or more storage elements
(not illustrated) that are available to host processing systems 120
and 130, and that are accessible through switch network 140. For
example, storage device 150 can include one or more storage area
networks (SANs). Storage device 150 supplies a storage capacity for
storing data. A portion of the storage capacity can be allocated to
VM 121, and a portion of the storage capacity can be allocated to
VM 131. The allocation of storage capacity is done by partitioning
the storage elements in storage device 150 and logging-on from VM
121 to one or more of the partitions, and by logging-on from VM 131
to the same partitions. In this way, VMs 121 and 131 obtain storage
capacity from storage device 150. In a particular embodiment (not
illustrated), storage device 150 can also represent connectivity to
other information handling systems, other devices or resources, or
a combination thereof.
[0016] A workload is an amount of computing power needed to perform
a computing task. A workload can be specified in terms of a
software application and the associated operating system that
performs the computing task, a number of processors in a host
processing system needed to perform the computing task, a bandwidth
capacity and storage capacity sufficient to keep the host
processing system operating efficiently, and other operational
parameters that describe the environment needed to perform the
computing task. When a particular computing task is identified, a
network administrator (not illustrated) creates a workload 111 on
VMM 110. VMM 110 functions to set up network system 100 to
implement workload 111 by allocating computing resources, network
switching, and storage capacities to perform the computing task. In
a particular embodiment, one or more additional workloads (not
illustrated) are implemented on VMM 110.
[0017] As such, workload 111 includes a server configuration
metadata module 112, a virtual machine image module 114, and a
network configuration metadata module 116. Server configuration
metadata module 112 includes server configuration metadata that
relates the requirements of the computing task to the hardware
capabilities of host processing systems 120 and 130, and can
include allocations of computing power and network bandwidth of
host processing system 120 or 130 and other hardware requirements
necessary for the performance of the computing task. Virtual
machine image module 114 includes virtual machine image information
such as the software application and associated operating system
that performs the computing task. Virtual machine image module 114
can implement the virtual machine image information using a
procedure whereby the operating system and software application are
installed in a virtual machine, using a memory image of the
operating system and software application that is loaded to memory
associated with the virtual machine, or using another mechanism
whereby the virtual machine is instantiated with the operating
system and software application. Network configuration metadata
module 116 includes network configuration metadata that relates the
requirements of the computing task to the parameters of switch
network 140, and can include profiles for the switch elements that
configure switch network 140 to provide a contiguous data path
between a virtual machine and the associated storage elements. The
profiles can specify bandwidth capacity, VLAN identifiers, priority
or quality-of-service policies, ACL rules, mirroring policies, and
any other configuration information used by the network elements to
establish route mappings between a virtual machine and the
associated storage elements. In a particular embodiment, the server
configuration metadata and the virtual machine image information
are implemented in accordance with the Open Virtualization Format
(OVF) version 1.0.0, dated Feb. 22, 2009.
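The three-module workload above can be sketched as a simple data structure. The field names are assumptions; a real implementation would carry OVF-formatted metadata rather than these illustrative fields.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ServerConfigMetadata:
    cpus: int                  # computing power allocated on the host
    memory_mb: int
    nic_bandwidth_mbps: int    # network bandwidth of the host system

@dataclass
class NetworkConfigMetadata:
    # One profile per switch element: bandwidth, VLAN IDs, QoS, ACLs, mirroring
    profiles: List[dict] = field(default_factory=list)

@dataclass
class Workload:
    server_config: ServerConfigMetadata    # module 112
    vm_image: str                          # module 114 (e.g. path to an image)
    network_config: NetworkConfigMetadata  # module 116

workload = Workload(
    server_config=ServerConfigMetadata(cpus=2, memory_mb=4096, nic_bandwidth_mbps=1000),
    vm_image="ovf/workload111.ovf",
    network_config=NetworkConfigMetadata(profiles=[{"vlan": 10, "bandwidth_mbps": 500}]),
)
```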
[0018] When workload 111 is fully configured and ready for
implementation, VMM 110 instantiates workload 111 onto host
processing system 120 or 130. In the illustrated example, VM 121 is
instantiated onto host processing system 120. This is illustrated
by solid arrow 171, where the server configuration metadata from
server configuration metadata module 112 is sent to host processing
system 120. This prepares host processing system 120 for workload
111 by creating VM 121. VM 121 can be created by VMM 110 directly,
by VMM 110 in cooperation with a virtual machine hypervisor (not
illustrated) in host processing system 120, or by another mechanism
for creating virtual machines on a host processing system. VMM 110
also loads VM 121 with the virtual machine image information from
virtual machine image module 114, as illustrated by solid arrow
172. In this way, VM 121 implements the operating system, and can
run the software application to perform the computing task.
Further, VMM 110 configures switch network 140 with the network
configuration metadata from network configuration metadata module
116, as illustrated by solid arrow 173. Here, VMM 110 utilizes port
161 as a communication channel to switch network 140, to
communicate the profiles from network configuration metadata module
116 to the switch elements in switch network 140, in order to
properly configure the switch elements to create a contiguous data
path between VM 121 and storage device 150. In this way, VM 121
obtains data communication access to storage device 150, including
the necessary bandwidth and associated VLAN identifiers, priority
or quality-of-service policies, ACL rules, mirroring policies, and
other configuration parameters.
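The instantiation sequence (arrows 171 through 173) can be sketched with hypothetical stub objects; `create_vm`, `load_image`, and `send_to_switch_network` are assumed helper names, not APIs from the disclosure.

```python
class StubVM:
    def __init__(self, server_config):
        self.server_config = server_config    # arrow 171: host prepared per metadata
        self.image = None

    def load_image(self, image):
        self.image = image                    # arrow 172: OS/application image loaded

class StubHost:
    def __init__(self):
        self.switch_messages = []             # profiles sent out the host's port

    def create_vm(self, server_config):
        return StubVM(server_config)

    def send_to_switch_network(self, profile):
        self.switch_messages.append(profile)  # arrow 173: in-band switch configuration

def instantiate_workload(workload, host):
    vm = host.create_vm(workload["server_config"])
    vm.load_image(workload["vm_image"])
    for profile in workload["network_profiles"]:
        host.send_to_switch_network(profile)
    return vm

host = StubHost()
vm = instantiate_workload(
    {"server_config": {"cpus": 2}, "vm_image": "img", "network_profiles": [{"vlan": 10}]},
    host,
)
```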
[0019] When a virtual machine monitor determines that the
processing resources of a host processing system can be better
utilized, or that the workloads can be more balanced between host
processing systems, or based upon another determination, then the
virtual machine monitor can migrate workloads between host
processing systems. In the illustrated example, workload 111 is
migrated from VM 121 on host processing system 120 to VM 131 on
host processing system 130. VMM 110 instantiates workload 111 onto
host processing system 130 by creating VM 131 on host processing
system 130. This is illustrated by dashed arrow 181, where the
server configuration metadata from server configuration metadata
module 112 is sent to host processing system 130, preparing host
processing system 130 for workload 111 by creating VM 131. VMM 110
also loads VM 131 with the virtual machine image information from
virtual machine image module 114, as illustrated by dashed arrow
182, such that VM 131 implements the operating system, and can run
the software application to perform the computing task. Further,
VMM 110 configures switch network 140 with the network
configuration metadata from network configuration metadata module
116, as illustrated by dashed arrow 183, utilizing port 162 as a
communication channel to switch network 140 to properly configure
the switch elements to create a contiguous data path between VM 131
and storage device 150. In another embodiment (not illustrated),
after migration of workload 111 from VM 121 to VM 131, VMM 110
deletes VM 121 from host processing system 120 to free up resources
of host processing system 120. In this way, VMM 110 maintains and
controls not only the creation, deletion, and migration of virtual
machines in network system 100, but also controls the configuration
of switch network 140, eliminating the need for a separate
mechanism to maintain switch network 140.
[0020] In a particular embodiment, VMM 110 configures the switch
elements in switch network 140 through in-band communications
through ports 161 and 162. For example, VMM 110 can initiate an
Ethernet transaction wherein VMM 110 addresses an Ethernet frame to
each particular switch element in switch network 140. The Ethernet
frame includes key information that identifies the various virtual
machines implemented on network system 100, and profile information
for each of the various virtual machines. Each switch element
receives the Ethernet frame, determines if it is the target of the
Ethernet frame, decodes the frame to identify the key information
and profile information associated with the various virtual
machines, and implements the switching functions called for in the
profile information. In a particular embodiment, the Ethernet frame
is in accordance with the IEEE 802.3 standard.
[0021] FIG. 2 illustrates one embodiment of an Ethernet frame 200
for a network configuration transaction. Frame 200 includes a
destination address (DA) field 210, a source address (SA) field
220, an Ethertype field 230 labeled "CONFIGURE_NETWORK," and a
protocol data unit (PDU) field 240 labeled "Network Configuration".
DA field 210 consists of 6 bytes and includes the address of the
target device that is the destination of Ethernet frame 200. SA
field 220 consists of 6 bytes and includes the address of the
initiating device that is the source of Ethernet frame 200.
Ethertype field 230 consists of 2 bytes and includes a coded
description of the fact that Ethernet frame 200 is a network
configuration type Ethernet frame. PDU field 240 consists of
between 46 and 1500 bytes.
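The field sizes above can be checked with a short packing sketch. The numeric value of the CONFIGURE_NETWORK Ethertype is an assumption: the disclosure names the field but assigns no code point, so the IEEE local-experimental Ethertype 0x88B5 is used as a stand-in.

```python
import struct

CONFIGURE_NETWORK = 0x88B5  # assumed value; the patent does not specify one

def build_frame(da: bytes, sa: bytes, pdu: bytes) -> bytes:
    """DA (6 bytes) + SA (6 bytes) + Ethertype (2 bytes) + PDU (46-1500 bytes)."""
    if len(da) != 6 or len(sa) != 6:
        raise ValueError("MAC addresses are 6 bytes")
    pdu = pdu.ljust(46, b"\x00")  # pad short PDUs to the 46-byte minimum
    if len(pdu) > 1500:
        raise ValueError("PDU exceeds 1500 bytes")
    return da + sa + struct.pack("!H", CONFIGURE_NETWORK) + pdu

frame = build_frame(b"\x02" * 6, b"\x02\x00\x00\x00\x00\x01", b"\x01\x00\x01")
```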
[0022] PDU field 240 includes an operation code (opcode) field 242,
a sequence number field 244, one or more key type/length/value
(TLV) fields 246, and one or more associated configuration
parameter TLV fields 248. Opcode field 242 encodes the operation
being performed by the particular Ethernet frame 200. In a
particular embodiment, opcode field 242 consists of 1 byte. For
example, a value of "1" can specify a "Create Profile" opcode,
indicating that a new profile is to be created on the target switch
element for each key TLV 246, and that the target switch element is
to be configured according to the profile information included in
the associated configuration parameter TLV 248. A value of "2" can
specify an "Add Configuration" opcode, indicating that the profile
identified by each key TLV 246 is to be amended to add the profile
information included in each associated configuration parameter TLV
248. A value of "3" can specify a "Modify Profile" opcode,
indicating that the profile identified by each key TLV 246 is to be
modified with the profile information included in each associated
configuration parameter TLV 248. A value of "4" can specify a
"Delete Profile" opcode, indicating that the profile identified by
each key TLV 246 is to be deleted from the target switch element. A
value of "5" can specify a "Reply/Success" opcode that is sent by
the target switch element to the initiating device, and indicating
that a Configure Network Ethernet frame has been received by the
target switch element, and that the specified operation was
successfully performed by the target switch element. A value of "6"
can specify a "Reply/Failure" opcode that is sent by the target
switch element to the initiating device, and indicating that a
Configure Network Ethernet frame has been received by the target
switch element, and that the specified operation was not
successfully performed by the target switch element. Thus, if
opcodes 1-4 are specified, then Ethernet frame 200 is a request
frame, whereas if opcodes 5-6 are specified, then Ethernet frame 200
is a reply frame. Each successive request frame is given a unique
sequence number that is placed in sequence number field 244 of a
request frame, and the corresponding reply frame includes the same
unique sequence number in sequence number field 244. For example,
sequence number field 244 can consist of 2 bytes, permitting up to
256 sequential Ethernet transactions between VMM 110 and a
particular switch element in switch network 140.
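The opcode and sequence-number layout described in paragraph [0022] can be sketched as follows, using the 1-byte opcode and 2-byte sequence number sizes given in the text:

```python
import struct
from enum import IntEnum

class Opcode(IntEnum):
    CREATE_PROFILE = 1
    ADD_CONFIGURATION = 2
    MODIFY_PROFILE = 3
    DELETE_PROFILE = 4
    REPLY_SUCCESS = 5
    REPLY_FAILURE = 6

def is_request(op: Opcode) -> bool:
    # Opcodes 1-4 mark request frames; opcodes 5-6 mark reply frames
    return op <= Opcode.DELETE_PROFILE

def pdu_header(op: Opcode, seq: int) -> bytes:
    # 1-byte opcode followed by the 2-byte sequence number; a reply
    # echoes the same sequence number as its request
    return struct.pack("!BH", op, seq)
```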
[0023] TLV fields 246 and 248 each include a type field 250, a
length field 252, an Organizationally Unique Identifier (OUI) field
254, a sub-type field 256, and an information field 258. Type field
250 can identify the beginning of the particular TLV 246, and
sub-type field 256 can indicate whether the particular TLV field is
a key TLV field 246 or a configuration parameter TLV field 248.
Length field 252 includes an indication, in bytes, of the length of
information field 258. When the TLV is a key TLV 246, information
field 258 includes the key information that is used to associate
data traffic through the switch element with the virtual machine
that is the target or initiator of the data traffic. When the TLV
is a configuration parameter TLV 248, information field 258
includes configuration information whereby the switch element is
configured to route the data traffic associated with the identified
virtual machine. OUI field 254 consists of 3 bytes, and includes
information that uniquely identifies the vendor, manufacturer, or
other organization associated with the switch element or VMM
110.
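A TLV from paragraph [0023] might be serialized as below. The widths of the type, length, and sub-type fields are assumptions (one byte each), since the disclosure only fixes the 3-byte OUI, and the sub-type codes for key versus configuration parameter TLVs are likewise invented for illustration.

```python
import struct

KEY_SUBTYPE = 1     # assumed code for a key TLV 246
CONFIG_SUBTYPE = 2  # assumed code for a configuration parameter TLV 248

def encode_tlv(tlv_type: int, oui: bytes, subtype: int, info: bytes) -> bytes:
    """Lay out type | length-of-info | 3-byte OUI | sub-type | information."""
    if len(oui) != 3:
        raise ValueError("OUI is 3 bytes")
    return struct.pack("!BB3sB", tlv_type, len(info), oui, subtype) + info

# A key TLV carrying the identifier that associates traffic with a VM:
key_tlv = encode_tlv(0x7F, b"\x00\x01\x02", KEY_SUBTYPE, b"vm-121")
```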
[0024] Thus, for example, when the network administrator initially
deploys workload 111, network configuration metadata module 116 can
include information such that host processing system 120 sends a
CONFIGURE_NETWORK Ethernet frame 200 with a value of "1" in opcode
field 242 to switch network 140 to create a network profile with
the desired network configuration information to provide a
contiguous data path between VM 121 and storage device 150. The
addressed network switches in switch network 140 reply with a
CONFIGURE_NETWORK Ethernet frame 200 with a value of "5" or "6" in
opcode field 242 to host processing system 120 to indicate that the
addressed network switches were either successful or unsuccessful
in creating the network profile. If, thereafter, the network
administrator desires to add network configuration information to
the network profile, or modify the network configuration
information in the network profile, then workload 111 is modified
such that the network configuration metadata in network
configuration metadata module 116 includes the additions or
modifications. In response, host processing system 120 sends a
CONFIGURE_NETWORK Ethernet frame 200 with a value of "2" or "3" in
opcode field 242 to switch network 140 to add to or modify,
respectively, the network profile, and the addressed network
switches reply with a CONFIGURE_NETWORK Ethernet frame 200 with a
value of "5" or "6" in opcode field 242 to host processing system
120.
[0025] When the network administrator migrates workload 111 from VM
121 to VM 131, then host processing system 130 sends a
CONFIGURE_NETWORK Ethernet frame 200 with a value of "1" in opcode
field 242 to switch network 140 to create a network profile with
the desired network configuration information to provide a
contiguous data path between VM 131 and storage device 150, and the
addressed network switches reply with a CONFIGURE_NETWORK Ethernet
frame 200 with a value of "5" or "6" in opcode field 242 to host
processing system 130. In addition, host processing system 130
sends a CONFIGURE_NETWORK Ethernet frame 200 with a value of "4" in
opcode field 242 to switch network 140 to delete the network
profile associated with VM 121, and the addressed network switches
reply with a CONFIGURE_NETWORK Ethernet frame 200 with a value of
"5" or "6" in opcode field 242 to host processing system 130.
[0026] FIG. 3 illustrates a method for reconfiguring network
services in dynamic virtualization environments in a flowchart
form, in accordance with an embodiment of the present disclosure.
The method starts at block 302. A workload is configured in a
virtual machine monitor in block 304. For example, workload 111 can
be created on VMM 110, including providing server configuration
metadata for server configuration metadata module 112, virtual
machine image information for virtual machine image module 114, and
network configuration metadata information for network
configuration metadata module 116. The virtual machine monitor
sends the server configuration metadata to a selected host
processing system in block 306. Thus, VMM 110 can determine that
workload 111 is suitably executed on host processing system 120,
and can send the server configuration metadata from server
configuration metadata module 112 to host processing system 120.
The virtual machine monitor instantiates a virtual machine on the
selected host processing system in block 308. Here, VMM 110 can
instantiate VM 121 on host processing system 120 and virtual
machine image module 114 can send the virtual machine image
information to VM 121. The virtual machine monitor sends the
network configuration metadata to the selected host processing
system in block 310. Thus, VMM 110 can send the network
configuration metadata from network configuration metadata module
116 to host processing system 120.
[0027] The selected host processing system sends a network
configuration request to the switch network to configure the switch
network to provide a contiguous data path between the virtual
machine and the resources on the switch network in block 312. For
example, host processing system 120 can format the network
configuration metadata from network configuration metadata module
116 into Ethernet frame 200 to configure a path over switch network
140 between VM 121 and storage device 150. The selected host
processing system receives a configuration reply from the switch
network in block 314. Thus, switch network 140 can send Ethernet
frame 200 to host processing system 120, to reply as to whether or
not the configuration request was executed successfully. A decision
is made as to whether or not the configuration request was executed
successfully in decision block 316. If not, the "NO" branch of
decision block 316 is taken and the processing returns to block
312, where the selected host processing system resends the network
configuration request to the switch network. If the configuration
request was executed successfully, the "YES" branch of decision
block 316 is taken and a decision is made as to whether or not the
workload is to be migrated in decision block 318. If not, the "NO"
branch of decision block 318 is taken, and processing loops back to
decision block 318 until the workload is to be migrated. When the
workload is to be migrated, the "YES" branch of decision block 318
is taken and processing returns to block 306, where the virtual
machine monitor sends the server configuration metadata to a newly
selected host processing system. For example, server configuration
metadata module 112 can send the server configuration metadata to
host processing system 130, and the process continues as described
above. In another embodiment (not illustrated), when the "YES"
branch of decision block 316 is taken, indicating that the
configuration request was executed successfully, then host
processing system 120 sends a CONFIGURE_NETWORK Ethernet frame 200
with a value of "4" in opcode field 242 to switch network 140 to
delete the network profile associated with VM 121, and the
addressed network switches reply with a CONFIGURE_NETWORK Ethernet
frame 200 with a value of "5" or "6" in opcode field 242 to host
processing system 120. In this way, unused network profiles are
removed from switch network 140.
[0028] In a particular embodiment, an information handling system
can be used to function as one or more of the network systems, or
carry out one or more of the methods described above. In another
embodiment, one or more of the systems described above can be
implemented in the form of an information handling system. FIG. 4
illustrates a functional block diagram of an embodiment of an
information handling system, generally designated as 400.
Information handling system 400 includes processor 410, a chipset
420, a memory 430, a graphics interface 440, an input/output (I/O)
interface 450, a disk controller 460, a network interface 470, and
a disk emulator 480.
[0029] Processor 410 is coupled to chipset 420. Chipset 420
supports processor 410, allowing processor 410 to process
machine-executable code. In a particular embodiment (not
illustrated), information handling system 400 includes one or more
additional processors, and chipset 420 supports the multiple
processors, allowing for simultaneous processing by each of the
processors, permitting the exchange of information between the
processors and the other elements of information handling system
400. Processor 410 can be coupled to chipset 420 via a unique
channel, or via a bus that shares information between processor
410, chipset 420, and other elements of information handling system
400.
[0030] Memory 430 is coupled to chipset 420. Memory 430 can be
coupled to chipset 420 via a unique channel, or via a bus that
shares information between chipset 420, memory 430, and other
elements of information handling system 400. In particular, a bus
can share information between processor 410, chipset 420 and memory
430. In a particular embodiment (not illustrated), processor 410 is
coupled to memory 430 through a unique channel. In accordance with
another aspect (not illustrated), an information handling system
can include a separate memory dedicated to each of the processors.
A non-limiting example of memory 430 includes static, dynamic, or
non-volatile random access memory (SRAM, DRAM, or NVRAM), read only
memory (ROM), flash memory, another type of memory, or any
combination thereof.
[0031] Graphics interface 440 is coupled to chipset 420. Graphics
interface 440 can be coupled to chipset 420 via a unique channel,
or via a bus that shares information between chipset 420, graphics
interface 440, and other elements of information handling system
400. Graphics interface 440 is coupled to a video display 444.
Other graphics interfaces (not illustrated) can also be used in
addition to graphics interface 440 if needed or desired. Video
display 444 can include one or more types of video displays, such
as a flat panel display or other type of display device.
[0032] I/O interface 450 is coupled to chipset 420. I/O interface
450 can be coupled to chipset 420 via a unique channel, or via a
bus that shares information between chipset 420, I/O interface 450,
and other elements of information handling system 400. Other I/O
interfaces (not illustrated) can also be used in addition to I/O
interface 450 if needed or desired. I/O interface 450 is coupled to
one or more add-on resources 454. Add-on resource 454 can also
include another data storage system, a graphics interface, a
network interface card (NIC), a sound/video processing card,
another suitable add-on resource or any combination thereof.
[0033] Network interface device 470 is coupled to I/O interface
450. Network interface 470 can be coupled to I/O interface 450 via
a unique channel, or via a bus that shares information between I/O
interface 450, network interface 470, and other elements of
information handling system 400. Other network interfaces (not
illustrated) can also be used in addition to network interface 470
if needed or desired. Network interface 470 can be a network
interface card (NIC) disposed within information handling system
400, on a main circuit board (e.g., a baseboard, a motherboard, or
any combination thereof), integrated onto another component such as
chipset 420, in another suitable location, or any combination
thereof. Network interface 470 includes a network channel 472 that
provides an interface between information handling system 400 and
other devices (not illustrated) that are external to information
handling system 400. Network interface 470 can also include
additional network channels (not illustrated).
[0034] Disk controller 460 is coupled to chipset 420. Disk
controller 460 can be coupled to chipset 420 via a unique channel,
or via a bus that shares information between chipset 420, disk
controller 460, and other elements of information handling system
400. Other disk controllers (not illustrated) can also be used in
addition to disk controller 460 if needed or desired. Disk
controller 460 can include a disk interface 462. Disk controller
460 can be coupled to one or more disk drives via disk interface
462. Such disk drives include a hard disk drive (HDD) 464 or an
optical disk drive (ODD) 466 (e.g., a Read/Write Compact Disk
(R/W-CD), a Read/Write Digital Video Disk (R/W-DVD), a Read/Write
mini Digital Video Disk (R/W mini-DVD), or another type of optical
disk drive), or any combination thereof. Additionally, disk
controller 460 can be coupled to disk emulator 480. Disk emulator
480 can permit a solid-state drive 484 to be coupled to information
handling system 400 via an external interface. The external
interface can include industry standard busses (e.g., USB or IEEE
1394 (FireWire)) or proprietary busses, or any combination thereof.
Alternatively, solid-state drive 484 can be disposed within
information handling system 400.
[0035] In the embodiments described above, an information handling
system can include any instrumentality or aggregate of
instrumentalities operable to compute, classify, process, transmit,
receive, retrieve, originate, switch, store, display, manifest,
detect, record, reproduce, handle, or use any form of information,
intelligence, or data for business, scientific, control,
entertainment, or other purposes. For example, an information
handling system can be a personal computer, a PDA, a consumer
electronic device, a network server or storage device, a switch, a
router, a wireless router, or other network communication device, or
any other suitable device and can vary in size, shape, performance,
functionality, and price. The information handling system can
include memory (volatile (e.g., random-access memory), nonvolatile
(e.g., read-only memory or flash memory), or any
combination thereof), one or more processing resources, such as a
central processing unit (CPU), a graphics processing unit (GPU),
hardware or software control logic, or any combination thereof.
Additional components of the information handling system can
include one or more storage devices, one or more communications
ports for communicating with external devices, as well as, various
input and output (I/O) devices, such as a keyboard, a mouse, a
video/graphic display, or any combination thereof. The information
handling system can also include one or more buses operable to
transmit communications between the various hardware components.
Portions of an information handling system may themselves be
considered information handling systems.
[0036] When referred to as a "device," a "module," or the like, the
embodiments described above can be configured as hardware, software
(which can include firmware), or any combination thereof. For
example, a portion of an information handling system device may be
hardware such as, for example, an integrated circuit (such as an
Application Specific Integrated Circuit (ASIC), a Field
Programmable Gate Array (FPGA), a structured ASIC, or a device
embedded on a larger chip), a card (such as a Peripheral Component
Interface (PCI) card, a PCI-express card, a Personal Computer
Memory Card International Association (PCMCIA) card, or other such
expansion card), or a system (such as a motherboard, a
system-on-a-chip (SoC), or a stand-alone device). Similarly, the
device could be software, including firmware embedded at a device,
such as a Pentium class or PowerPC™ brand processor, or other
such device, or software capable of operating a relevant
environment of the information handling system. The device could
also be a combination of any of the foregoing examples of hardware
or software. Note that an information handling system can include
an integrated circuit or a board-level product having portions
thereof that can also be any combination of hardware and
software.
[0037] Devices, modules, resources, or programs that are in
communication with one another need not be in continuous
communication with each other, unless expressly specified
otherwise. In addition, devices, modules, resources, or programs
that are in communication with one another can communicate directly
or indirectly through one or more intermediaries.
[0038] Although only a few exemplary embodiments have been
described in detail above, those skilled in the art will readily
appreciate that many modifications are possible in the exemplary
embodiments without materially departing from the novel teachings
and advantages of the embodiments of the present disclosure.
Accordingly, all such modifications are intended to be included
within the scope of the embodiments of the present disclosure as
defined in the following claims. In the claims, means-plus-function
clauses are intended to cover the structures described herein as
performing the recited function and not only structural
equivalents, but also equivalent structures.
* * * * *