U.S. patent application number 13/539992 was filed with the patent office on 2012-07-02 and published on 2014-01-02 as publication number 20140006482 for a method and system for providing inter-cloud services.
This patent application is currently assigned to VMware, Inc. The applicants listed for this patent are Guy Hussussian, John Kilroy, and Jagannath N. Raghu. The invention is credited to Guy Hussussian, John Kilroy, and Jagannath N. Raghu.
Application Number: 20140006482 (application 13/539992)
Family ID: 49779310
Filed: 2012-07-02
Published: 2014-01-02

United States Patent Application 20140006482
Kind Code: A1
Raghu; Jagannath N.; et al.
January 2, 2014
METHOD AND SYSTEM FOR PROVIDING INTER-CLOUD SERVICES
Abstract
The present application is directed to a distributed-services
component of a distributed system that facilitates multi-cloud
aggregation using a cloud-connector server and cloud-connector
nodes that cooperate to provide services that are distributed
across multiple clouds. These services include the transfer of
virtual-machine containers, or workloads, between two different
clouds, as well as remote management interfaces.
Inventors: Raghu; Jagannath N. (Palo Alto, CA); Kilroy; John
(Cambridge, MA); Hussussian; Guy (Palo Alto, CA)

Applicants:
  Raghu; Jagannath N.    Palo Alto, CA, US
  Kilroy; John           Cambridge, MA, US
  Hussussian; Guy        Palo Alto, CA, US

Assignee: VMware, Inc., Palo Alto, CA

Family ID: 49779310
Appl. No.: 13/539992
Filed: July 2, 2012

Current U.S. Class: 709/203
Current CPC Class: H04L 67/10 20130101; G06F 9/5077 20130101;
G06Q 40/125 20131203; G06Q 10/105 20130101
Class at Publication: 709/203
International Class: G06F 15/16 20060101 G06F015/16
Claims
1. A distributed-services component of a
multiple-cloud-computing-facility aggregation, the
distributed-service component comprising: a cloud-connector server
that provides an electronic cloud-connector server interface
through which a cloud-connector-server user interface is displayed
on a remote computer and cloud-connector-server-provided
distributed services are accessed from a remote computer, and that
provides an electronic cloud-connector-node interface through which
the cloud-connector server requests services provided by remote
cloud-connector nodes; and two or more cloud-connector nodes, each
installed in a different cloud-computing facility, that each
provides an electronic interface through which the cloud-connector
server accesses services provided by the cloud-connector node and
that each accesses a cloud-management interface within the
cloud-computing facility in which the cloud-connector node is
installed.
2. The distributed-services component of claim 1 wherein the
multiple-cloud-computing-facility aggregation comprises: at least
one cloud-computing facility managed by a virtual-data-center
server; and additional cloud-computing facilities that are
operationally and geographically distinct from the at least one
cloud-computing facility managed by the virtual-data-center
server.
3. The distributed-services component of claim 1 wherein the
multiple-cloud-computing-facility aggregation comprises: at least
one cloud-computing facility that includes two or more organization
virtual data centers managed by a cloud director; and additional
cloud-computing facilities that are operationally and
geographically distinct from the at least one cloud-computing
facility managed by the cloud director.
4. The distributed-services component of claim 1 wherein the
multiple-cloud-computing-facility aggregation comprises: at least
one cloud-computing facility managed by a management system that is
neither a cloud director nor a virtual-data-center management
server; and additional cloud-computing facilities that are
operationally and geographically distinct from the at least one
cloud-computing facility managed by the management system that is
neither a cloud director nor a virtual-data-center management
server.
5. The distributed-services component of claim 1 wherein the
multiple-cloud-computing-facility aggregation comprises at least
two cloud-computing facilities managed by two different types of
management systems.
6. The distributed-services component of claim 1 wherein each
cloud-connector node is a virtual appliance that executes within a
management system of a cloud-computing system selected from among a
virtual-data-center management server, a cloud director, and a
management system that is neither a cloud director nor a
virtual-data-center management server.
7. The distributed-services component of claim 6 wherein a
cloud-connector node comprises: an application program interface
through which the cloud-connector server requests services provided
by the cloud-connector node; an authorization service that
authorizes access to the cloud-connector node and to services
provided by the cloud-connector node; service routines that, when
executed in response to a request received through the application
program interface, carry out the request and provide a response to
the request; a database that stores configuration data for the
cloud-connector node; adapters that provide access by the
cloud-connector node to a file system and a management interface of
a cloud-computing-facility management system; and a messaging
protocol and network transfer services that together provide for
transfer of data files to remote cloud-connector nodes.
8. The distributed-services component of claim 7 wherein the
messaging protocol and network transfer services provide for
checkpoint-restart of interrupted or failed data-transfer
operations.
9. The distributed-services component of claim 7 wherein the
cloud-connector node provides, through the application program
interface, a login service, a parameterized service request that
invokes a particular parameter-specified service, a data-upload
service that receives and stores data transmitted to the
cloud-connector node by the cloud-connector server, a file-transfer
service that, when requested by the cloud-connector server,
transfers a file from the cloud-connector node to a different
cloud-connector node, and a file-transfer service that, when
requested by the cloud-connector server, transfers a file from a
different cloud-connector node to the cloud-connector node.
10. The distributed-services component of claim 7 wherein the
application program interface is accessed by the cloud-connector
server through the representational state transfer protocol via a
hypertext transfer protocol proxy server.
11. A cloud-connector node that executes within a cloud-computing
facility and that is managed by a remote cloud-connector server,
the cloud-connector node comprising a virtual appliance within a
management system of the cloud-computing facility and further
comprising: an application program interface through which the
remote cloud-connector server requests services provided by the
cloud-connector node; an authorization service that authorizes
access to the cloud-connector node and to services provided by the
cloud-connector node; service routines that, when executed in
response to a request received through the application program
interface, carry out the request and provide a response to the
request; a database that stores configuration data for the
cloud-connector node; adapters that provide access, by the service
routines within the cloud-connector node, to a file system and a
management interface of a cloud-computing-facility management
system; and a messaging protocol and network transfer services that
together provide for transfer of data files to remote
cloud-connector nodes.
12. The cloud-connector node of claim 11 wherein the
cloud-connector node is installed in a virtual-data-center server
that manages a virtual data center within the cloud-computing
facility.
13. The cloud-connector node of claim 11 wherein the
cloud-connector node is installed in a cloud director that manages
organization virtual data centers within the cloud-computing
facility.
14. The cloud-connector node of claim 11 wherein the
cloud-connector node is installed in a management system that is
neither a cloud director nor a virtual-data-center management
server, the management system managing the cloud-computing
facility.
15. The cloud-connector node of claim 11 wherein the
cloud-connector server is located within a cloud-computing facility
that is geographically and operationally remote from the
cloud-computing facility within which the cloud-connector node
executes.
16. The cloud-connector node of claim 11 wherein the messaging
protocol and network transfer services provide for
checkpoint-restart of interrupted or failed data-transfer
operations.
17. The cloud-connector node of claim 11 wherein the
cloud-connector node provides, through the application program
interface, a login service, a parameterized service request that
invokes a particular parameter-specified service, a data-upload
service that receives and stores data transmitted to the
cloud-connector node by the cloud-connector server, a file-transfer
service that, when requested by the cloud-connector server,
transfers a file from the cloud-connector node to a different
cloud-connector node, and a file-transfer service that, when
requested by the cloud-connector server, transfers a file from a
different cloud-connector node to the cloud-connector node.
18. The cloud-connector node of claim 11 wherein the application
program interface is accessed by the cloud-connector server through
the representational state transfer protocol via a hypertext
transfer protocol proxy server.
19. The cloud-connector node of claim 11 that, together with the
cloud-connector server and additional remote cloud-connector nodes
in operationally distinct cloud-computing facilities, comprises a
distributed-services component of a
multiple-cloud-computing-facility aggregation.
20. A method for providing distributed services within multiple,
operationally distinct cloud-computing facilities, the method
comprising: installing, within one of the multiple, operationally
distinct cloud-computing facilities, a cloud-connector server that
provides an electronic cloud-connector server interface through
which a cloud-connector-server user interface is displayed on a
remote computer and cloud-connector-server-provided distributed
services are accessed from a remote computer and that provides an
electronic cloud-connector-node interface through which the
cloud-connector server requests services provided by remote
cloud-connector nodes; and installing two or more cloud-connector
nodes, each in a different cloud-computing facility, that each
provides an electronic interface through which the cloud-connector
server accesses services provided by the cloud-connector node and
that each accesses a cloud-management interface within the
cloud-computing facility in which the cloud-connector node is
installed.
21. A computer-readable data-storage device that stores digitally
encoded computer instructions that carry out a method that provides
distributed services within multiple, operationally distinct
cloud-computing facilities, the method comprising: installing,
within one of the multiple, operationally distinct cloud-computing
facilities, a cloud-connector server that provides an electronic
cloud-connector server interface through which a
cloud-connector-server user interface is displayed on a remote
computer and cloud-connector-server-provided distributed services
are accessed from a remote computer and that provides an electronic
cloud-connector-node interface through which the cloud-connector
server requests services provided by remote cloud-connector nodes;
and installing two or more cloud-connector nodes, each in a
different cloud-computing facility, that each provides an
electronic interface through which the cloud-connector server
accesses services provided by the cloud-connector node and that
each accesses a cloud-management interface within the
cloud-computing facility in which the cloud-connector node is
installed.
Description
TECHNICAL FIELD
[0001] The present patent application is directed to
virtual-machine-based computing and cloud computing and, in
particular, to methods and systems that provide inter-cloud
services.
BACKGROUND
[0002] The development and evolution of modern computing has, in
many ways, been facilitated by the power of logical abstraction.
Early computers were manually programmed by slow and tedious input
of machine instructions into the computers' memories. Over time,
assembly languages and assemblers were developed to provide a level
of abstraction above the machine-instruction hardware-interface
level and to allow programmers to develop programs more rapidly and
accurately. Assembly-language-based operations are more easily
encoded by human programmers than machine-instruction-based
operations, and assemblers provided additional features, including
assembly directives, routine calls, and a logical framework for
program development. The development of operating systems provided
yet another type of abstraction that provided programmers with
logical, easy-to-understand system-call interfaces to
computer-hardware functionality. As operating systems developed,
additional internal levels of abstraction were created within
operating systems, including virtual memory, implemented by
operating-system paging of memory pages between electronic memory
and mass-storage devices, which provided easy-to-use, linear
memory-address spaces much larger than could be provided by the
hardware memory of computer systems. Additional levels of
abstraction were created in the programming-language domain, with
compilers developed for a wide variety of compiled languages that
greatly advanced the ease of programming and the number and
capabilities of programming tools with respect to those provided by
assemblers and assembly languages. Higher-level scripting languages
and special-purpose interpreted languages provided even higher
levels of abstraction and greater ease of application development
in particular areas. Similarly, block-based and sector-based
interfaces to mass-storage devices have been abstracted through
many levels of abstraction to modern database management systems,
which provide for highly available and fault-tolerant storage of
structured data that can be analyzed, interpreted, and manipulated
through powerful high-level query languages.
[0003] In many ways, a modern computer system can be thought of as
many different levels of abstraction along many different, often
interdependent, dimensions. More recently, powerful new levels of
abstraction have been developed with respect to virtual machines,
which provide virtual execution environments for application
programs and operating systems. Virtual-machine technology
essentially abstracts the hardware resources and interfaces of a
computer system on behalf of multiple virtual machines, each
comprising one or more application programs and an operating
system. Even more recently, cloud-computing services have emerged
to provide abstract interfaces to enormous collections of
geographically dispersed data centers, allowing computational
service providers to develop and deploy complex Internet-based
services that execute on tens or hundreds of physical servers
through abstract cloud-computing interfaces.
[0004] While levels of abstraction within computational facilities
are generally intended to be well organized and are often
hierarchically structured, with dependencies and interconnections
generally constrained to adjacent levels in the various
hierarchies, practically, there are often many interdependencies
that span multiple hierarchical levels and that pose difficult
design and implementation issues. As levels of abstraction continue
to be added to produce new and useful computational interfaces,
such as cloud-computing-services interfaces, designers, developers,
and users of computational tools continue to seek implementation
methods and strategies to efficiently and logically support
additional levels of abstraction.
SUMMARY
[0005] The present application is directed to a
distributed-services component of a distributed system that
facilitates multi-cloud aggregation using a cloud-connector server
and cloud-connector nodes that cooperate to provide services that
are distributed across multiple clouds. These services include the
transfer of virtual-machine containers, or workloads, between two
different clouds, as well as remote management interfaces.
BRIEF DESCRIPTION OF THE DRAWINGS
[0006] FIG. 1 provides a general architectural diagram for various
types of computers.
[0007] FIG. 2 illustrates an Internet-connected distributed
computer system.
[0008] FIG. 3 illustrates cloud computing.
[0009] FIG. 4 illustrates generalized hardware and software
components of a general-purpose computer system, such as a
general-purpose computer system having an architecture similar to
that shown in FIG. 1.
[0010] FIG. 5 illustrates one type of virtual machine and
virtual-machine execution environment.
[0011] FIG. 6 illustrates an OVF package.
[0012] FIG. 7 illustrates virtual data centers provided as an
abstraction of underlying physical-data-center hardware
components.
[0013] FIG. 8 illustrates virtual-machine components of a
virtual-data-center management server and physical servers of a
physical data center above which a virtual-data-center interface is
provided by the virtual-data-center management server.
[0014] FIG. 9 illustrates a cloud-director level of
abstraction.
[0015] FIG. 10 illustrates virtual-cloud-connector nodes ("VCC
nodes") and a VCC server, components of a distributed system that
provides multi-cloud aggregation and that includes a
cloud-connector server and cloud-connector nodes that cooperate to
provide services that are distributed across multiple clouds.
[0016] FIG. 11 illustrates the VCC server and VCC nodes in a
slightly different fashion than the VCC server and VCC nodes are
illustrated in FIG. 10.
[0017] FIG. 12 illustrates one implementation of a VCC node.
[0018] FIG. 13 provides a control-flow diagram that illustrates
configuration and registration of a VCC node through the
administrative interface provided as a web service.
[0019] FIGS. 14-16 provide control-flow diagrams that illustrate a
general service-request-processing operation of a VCC node.
[0020] FIGS. 17-19 illustrate a more complex, file-transfer service
request that may be issued by a VCC server to a VCC node, which, in
turn, interacts with a second VCC node to carry out the requested
file transfer.
DETAILED DESCRIPTION
[0021] As discussed above, modern computing can be considered to be
a collection of many different levels of abstraction above the
computing-hardware level that includes physical computer systems,
data-storage systems and devices, and communications networks. The
present application is related to a multi-cloud-aggregation level
of abstraction that provides homogenous-cloud and
heterogeneous-cloud distributed management services, each cloud
generally an abstraction of a large number of virtual resource
pools comprising processing, storage, and network resources, each
of which, in turn, can be considered to be a collection of
abstractions above underlying physical hardware devices.
[0022] The term "abstraction" is not, in any way, intended to mean
or suggest an abstract idea or concept. Computational abstractions
are tangible, physical interfaces that are implemented, ultimately,
using physical computer hardware, data-storage devices, and
communications systems. Instead, the term "abstraction" refers, in
the current discussion, to a logical level of functionality
encapsulated within one or more concrete, tangible,
physically-implemented computer systems with defined interfaces
through which electronically-encoded data is exchanged, process
execution is launched, and electronic services are provided.
Interfaces may include graphical and textual data displayed on
physical display devices as well as computer programs and routines
that control physical computer processors to carry out various
tasks and operations and that are invoked through electronically
implemented application programming interfaces ("APIs") and other
electronically implemented interfaces. There is a tendency among
those unfamiliar with modern technology and science to misinterpret
the terms "abstract" and "abstraction" when used to describe
certain aspects of modern computing. For example, one frequently
encounters allegations that, because a computational system is
described in terms of abstractions, functional layers, and
interfaces, it is somehow different from a physical machine or
device. Such allegations are unfounded. One only needs to
disconnect a computer system or group of computer systems from
their respective power supplies to appreciate the physical, machine
nature of complex computer technologies. One also frequently
encounters statements made by those unfamiliar with modern
technology and science that characterize a computational technology
as being "only software," and thus not a machine or device.
Software is essentially a sequence of encoded symbols, such as a
printout of a computer program or digitally encoded computer
instructions sequentially stored in a file on an optical disk or
within an electromechanical mass-storage device. Software alone can
do nothing. It is only when encoded computer instructions are
loaded into an electronic memory within a computer system and
executed on a physical processor that so-called "software
implemented" functionality is provided. The digitally encoded
computer instructions are an essential control component of
processor-controlled machines and devices, no less essential than a
cam-shaft control system in an internal-combustion engine.
Multi-cloud aggregations, cloud-computing services, virtual-machine
containers and virtual machines, communications interfaces, and
many of the other topics discussed below are tangible, physical
components of physical, electro-optical-mechanical computer
systems.
[0023] FIG. 1 provides a general architectural diagram for various
types of computers. The computer system contains one or multiple
central processing units ("CPUs") 102-105, one or more electronic
memories 108 interconnected with the CPUs by a CPU/memory-subsystem
bus 110 or multiple busses, a first bridge 112 that interconnects
the CPU/memory-subsystem bus 110 with additional busses 114 and
116, or other types of high-speed interconnection media, including
multiple, high-speed serial interconnects. These busses or serial
interconnections, in turn, connect the CPUs and memory with
specialized processors, such as a graphics processor 118, and with
one or more additional bridges 120, which are interconnected with
high-speed serial links or with multiple controllers 122-127, such
as controller 127, that provide access to various different types
of mass-storage devices 128, electronic displays, input devices,
and other such components, subcomponents, and computational
resources. It should be noted that computer-readable data-storage
devices include optical and electromagnetic disks, electronic
memories, and other physical data-storage devices. Those familiar
with modern science and technology appreciate that electromagnetic
radiation and propagating signals do not store data for subsequent
retrieval, and can transiently "store" only a byte or less of
information per mile, far less information than needed to encode
even the simplest of routines.
[0024] Of course, there are many different types of computer-system
architectures that differ from one another in the number of
different memories, including different types of hierarchical cache
memories, the number of processors and the connectivity of the
processors with other system components, the number of internal
communications busses and serial links, and in many other ways.
However, computer systems generally execute stored programs by
fetching instructions from memory and executing the instructions in
one or more processors. Computer systems include general-purpose
computer systems, such as personal computers ("PCs"), various types
of servers and workstations, and higher-end mainframe computers,
but may also include a plethora of various types of special-purpose
computing devices, including data-storage systems, communications
routers, network nodes, tablet computers, and mobile
telephones.
[0025] FIG. 2 illustrates an Internet-connected distributed
computer system. As communications and networking technologies have
evolved in capability and accessibility, and as the computational
bandwidths, data-storage capacities, and other capabilities and
capacities of various types of computer systems have steadily and
rapidly increased, much of modern computing now generally involves
large distributed systems and computers interconnected by local
networks, wide-area networks, wireless communications, and the
Internet. FIG. 2 shows a typical distributed system in which a
large number of PCs 202-205, a high-end distributed mainframe
system 210 with a large data-storage system 212, and a large
computer center 214 with large numbers of rack-mounted servers or
blade servers are all interconnected through various communications and
networking systems that together comprise the Internet 216. Such
distributed computing systems provide diverse arrays of
functionalities. For example, a PC user sitting in a home office
may access hundreds of millions of different web sites provided by
hundreds of thousands of different web servers throughout the world
and may access high-computational-bandwidth computing services from
remote computer facilities for running complex computational
tasks.
[0026] Until recently, computational services were generally
provided by computer systems and data centers purchased,
configured, managed, and maintained by service-provider
organizations. For example, an e-commerce retailer generally
purchased, configured, managed, and maintained a data center
including numerous web servers, back-end computer systems, and
data-storage systems for serving web pages to remote customers,
receiving orders through the web-page interface, processing the
orders, tracking completed orders, and other myriad different tasks
associated with an e-commerce enterprise.
[0027] FIG. 3 illustrates cloud computing. In the recently
developed cloud-computing paradigm, computing cycles and
data-storage facilities are provided to organizations and
individuals by cloud-computing providers. In addition, larger
organizations may elect to establish private cloud-computing
facilities in addition to, or instead of, subscribing to computing
services provided by public cloud-computing service providers. In
FIG. 3, a system administrator for an organization, using a PC 302,
accesses the organization's private cloud 304 through a local
network 306 and private-cloud interface 308 and also accesses,
through the Internet 310, a public cloud 312 through a public-cloud
services interface 314. The administrator can, in either the case
of the private cloud 304 or public cloud 312, configure virtual
computer systems and even entire virtual data centers and launch
execution of application programs on the virtual computer systems
and virtual data centers in order to carry out any of many
different types of computational tasks. As one example, a small
organization may configure and run a virtual data center within a
public cloud that executes web servers to provide an e-commerce
interface through the public cloud to remote customers of the
organization, such as a user viewing the organization's e-commerce
web pages on a remote user system 316.
[0028] Cloud-computing facilities are intended to provide
computational bandwidth and data-storage services much as utility
companies provide electrical power and water to consumers. Cloud
computing provides enormous advantages to small organizations
without the resources to purchase, manage, and maintain in-house
data centers. Such organizations can dynamically add and delete
virtual computer systems from their virtual data centers within
public clouds in order to track computational-bandwidth and
data-storage needs, rather than purchasing sufficient computer
systems within a physical data center to handle peak
computational-bandwidth and data-storage demands. Moreover, small
organizations can completely avoid the overhead of maintaining and
managing physical computer systems, including hiring and
periodically retraining information-technology specialists and
continuously paying for operating-system and
database-management-system upgrades. Furthermore, cloud-computing
interfaces allow for easy and straightforward configuration of
virtual computing facilities, flexibility in the types of
applications and operating systems that can be configured, and
other functionalities that are useful even for owners and
administrators of private cloud-computing facilities used by a
single organization.
[0029] FIG. 4 illustrates generalized hardware and software
components of a general-purpose computer system, such as a
general-purpose computer system having an architecture similar to
that shown in FIG. 1. The computer system 400 is often considered
to include three fundamental layers: (1) a hardware layer or level
402; (2) an operating-system layer or level 404; and (3) an
application-program layer or level 406. The hardware layer 402
includes one or more processors 408, system memory 410, various
different types of input-output ("I/O") devices 410 and 412, and
mass-storage devices 414. Of course, the hardware level also
includes many other components, including power supplies, internal
communications links and busses, specialized integrated circuits,
many different types of processor-controlled or
microprocessor-controlled peripheral devices and controllers, and
many other components. The operating system 404 interfaces to the
hardware level 402 through a low-level operating system and
hardware interface 416 generally comprising a set of non-privileged
computer instructions 418, a set of privileged computer
instructions 420, a set of non-privileged registers and memory
addresses 422, and a set of privileged registers and memory
addresses 424. In general, the operating system exposes
non-privileged instructions, non-privileged registers, and
non-privileged memory addresses 426 and a system-call interface 428
as an operating-system interface 430 to application programs
432-436 that execute within an execution environment provided to
the application programs by the operating system. The operating
system, alone, accesses the privileged instructions, privileged
registers, and privileged memory addresses. By reserving access to
privileged instructions, privileged registers, and privileged
memory addresses, the operating system can ensure that application
programs and other higher-level computational entities cannot
interfere with one another's execution and cannot change the
overall state of the computer system in ways that could
deleteriously impact system operation. The operating system
includes many internal components and modules, including a
scheduler 442, memory management 444, a file system 446, device
drivers 448, and many other components and modules. To a certain
degree, modern operating systems provide numerous levels of
abstraction above the hardware level, including virtual memory,
which provides to each application program and other computational
entities a separate, large, linear memory-address space that is
mapped by the operating system to various electronic memories and
mass-storage devices. The scheduler orchestrates interleaved
execution of various different application programs and
higher-level computational entities, providing to each application
program a virtual, stand-alone system devoted entirely to the
application program. From the application program's standpoint, the
application program executes continuously without concern for the
need to share processor resources and other system resources with
other application programs and higher-level computational entities.
The device drivers abstract details of hardware-component
operation, allowing application programs to employ the system-call
interface for transmitting and receiving data to and from
communications networks, mass-storage devices, and other I/O
devices and subsystems. The file system 446 facilitates abstraction
of mass-storage-device and memory resources as a high-level,
easy-to-access, file-system interface. Thus, the development and
evolution of the operating system has resulted in the generation of
a type of multi-faceted virtual execution environment for
application programs and other higher-level computational
entities.
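To make these layers concrete, the following short Python sketch (an
illustration added for this discussion, not part of the application)
contrasts the thin system-call-level interface exposed by the
operating system with the higher-level file-system abstraction built
above it:

    # Minimal illustration of two operating-system abstraction levels:
    # the system-call interface and the file-system interface above it.
    import os
    import tempfile

    path = os.path.join(tempfile.gettempdir(), "abstraction_demo.txt")
    with open(path, "w") as f:
        f.write("hello, abstraction\n")

    # System-call level: os.open/os.read are thin wrappers around the
    # open() and read() system calls; the kernel mediates every access
    # and hands back an integer file descriptor.
    fd = os.open(path, os.O_RDONLY)
    raw = os.read(fd, 64)
    os.close(fd)

    # File-system/application level: the file object layers buffering,
    # text decoding, and iteration over the same system calls, hiding
    # descriptors entirely.
    with open(path) as f:
        text = f.read()

    print(raw, text)

The same data flows through both paths; only the level of abstraction
at which the program addresses the operating system differs.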
[0030] While the execution environments provided by operating
systems have proved to be an enormously successful level of
abstraction within computer systems, the operating-system-provided
level of abstraction is nonetheless associated with difficulties
and challenges for developers and users of application programs and
other higher-level computational entities. One difficulty arises
from the fact that there are many different operating systems that
run within various different types of computer hardware. In many
cases, popular application programs and computational systems are
developed to run on only a subset of the available operating
systems, and can therefore be executed within only a subset of the
various different types of computer systems on which the operating
systems are designed to run. Often, even when an application
program or other computational system is ported to additional
operating systems, the application program or other computational
system can nonetheless run more efficiently on the operating
systems for which the application program or other computational
system was originally targeted. Another difficulty arises from the
increasingly distributed nature of computer systems. Although
distributed operating systems are the subject of considerable
research and development efforts, many of the popular operating
systems are designed primarily for execution on a single computer
system. In many cases, it is difficult to move application
programs, in real time, between the different computer systems of a
distributed computer system for high-availability, fault-tolerance,
and load-balancing purposes. The problems are even greater in
heterogeneous distributed computer systems which include different
types of hardware and devices running different types of operating
systems. Operating systems continue to evolve, as a result of which
certain older application programs and other computational entities
may be incompatible with more recent versions of operating systems
for which they are targeted, creating compatibility issues that are
particularly difficult to manage in large distributed systems.
[0031] For all of these reasons, a higher level of abstraction,
referred to as the "virtual machine," has been developed and
evolved to further abstract computer hardware in order to address
many difficulties and challenges associated with traditional
computing systems, including the compatibility issues discussed
above. FIG. 5 illustrates one type of virtual machine and
virtual-machine execution environment. FIG. 5 uses the same
illustration conventions as used in FIG. 4. In particular, the
computer system 500 in FIG. 5 includes the same hardware layer 502
as the hardware layer 402 shown in FIG. 4. However, rather than
providing an operating system layer directly above the hardware
layer, as in FIG. 4, the virtualized computing environment
illustrated in FIG. 5 features a virtualization layer 504 that
interfaces through a virtualization-layer/hardware-layer interface
506, equivalent to interface 416 in FIG. 4, to the hardware. The
virtualization layer provides a hardware-like interface 508 to a
number of virtual machines, such as virtual machine 510, executing
above the virtualization layer in a virtual-machine layer 512. Each
virtual machine includes one or more application programs or other
higher-level computational entities packaged together with an
operating system, such as application 514 and operating system 516
packaged together within virtual machine 510. Each virtual machine
is thus equivalent to the operating-system layer 404 and
application-program layer 406 in the general-purpose computer
system shown in FIG. 4. Each operating system within a virtual
machine interfaces to the virtualization-layer interface 508 rather
than to the actual hardware interface 506. The virtualization layer
partitions hardware resources into abstract virtual-hardware layers
to which each operating system within a virtual machine interfaces.
The operating systems within the virtual machines, in general, are
unaware of the virtualization layer and operate as if they were
directly accessing a true hardware interface. The virtualization
layer ensures that each of the virtual machines currently executing
within the virtual environment receives a fair allocation of
underlying hardware resources and that all virtual machines receive
sufficient resources to progress in execution. The
virtualization-layer interface 508 may differ for different
operating systems. For example, the virtualization layer is
generally able to provide virtual hardware interfaces for a variety
of different types of computer hardware. This allows, as one
example, a virtual machine that includes an operating system
designed for a particular computer architecture to run on hardware
of a different architecture. The number of virtual machines need
not be equal to the number of physical processors or even a
multiple of the number of processors. The virtualization layer
includes a virtual-machine-monitor module 518 that virtualizes
physical processors in the hardware layer to create virtual
processors on which each of the virtual machines executes. For
execution efficiency, the virtualization layer attempts to allow
virtual machines to directly execute non-privileged instructions
and to directly access non-privileged registers and memory.
However, when the operating system within a virtual machine
accesses virtual privileged instructions, virtual privileged
registers, and virtual privileged memory through the
virtualization-layer interface 508, the accesses result in
execution of virtualization-layer code to simulate or emulate the
privileged resources. The virtualization layer additionally
includes a kernel module 520 that manages memory, communications,
and data-storage machine resources on behalf of executing virtual
machines. The kernel, for example, maintains shadow page tables on
each virtual machine so that hardware-level virtual-memory
facilities can be used to process memory accesses. The kernel
additionally includes routines that implement virtual
communications and data-storage devices as well as device drivers
that directly control the operation of underlying hardware
communications and data-storage devices. Similarly, the kernel
virtualizes various other types of I/O devices, including
keyboards, optical-disk drives, and other such devices. The
virtualization layer essentially schedules execution of virtual
machines much like an operating system schedules execution of
application programs, so that the virtual machines each execute
within a complete and fully functional virtual hardware layer.
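The trap-and-emulate behavior described above can be modeled
schematically. The toy Python sketch below uses invented names and is
greatly simplified relative to any real virtualization layer, but it
shows the essential control flow: non-privileged work executes
directly, while privileged operations trap into virtualization-layer
code that emulates them against per-virtual-machine state.

    # Toy trap-and-emulate model (illustrative only): guest "instructions"
    # execute directly unless privileged, in which case control transfers
    # to virtualization-layer code that emulates the privileged resource.

    class PrivilegeTrap(Exception):
        """Signals that a guest attempted a privileged operation."""

    class VirtualMachineMonitor:
        def __init__(self):
            self.page_table_base = {}   # emulated privileged register, per VM

        def emulate(self, vm_id, instruction, operand):
            # Virtualization-layer code simulating the privileged resource.
            if instruction == "priv_load_page_table":
                self.page_table_base[vm_id] = operand
            else:
                raise NotImplementedError(instruction)

    def run_guest(vmm, vm_id, program):
        accumulator = 0
        for instruction, operand in program:
            try:
                if instruction.startswith("priv_"):
                    raise PrivilegeTrap(instruction)  # hardware would trap here
                accumulator += operand            # direct, non-privileged execution
            except PrivilegeTrap:
                vmm.emulate(vm_id, instruction, operand)
        return accumulator

    vmm = VirtualMachineMonitor()
    total = run_guest(vmm, "vm-510",
                      [("add", 21), ("add", 21), ("priv_load_page_table", 0x1000)])
    print(total, vmm.page_table_base)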
[0032] A virtual machine is encapsulated within a data package for
transmission, distribution, and loading into a virtual-execution
environment. One public standard for virtual-machine encapsulation
is referred to as the "open virtualization format" ("OVF"). The OVF
standard specifies a format for digitally encoding a virtual
machine within one or more data files. FIG. 6 illustrates an OVF
package. An OVF package 602 includes an OVF descriptor 604, an OVF
manifest 606, an OVF certificate 608, one or more disk-image files
610-611, and one or more resource files 612-614. The OVF package
can be encoded and stored as a single file or as a set of files.
The OVF descriptor 604 is an XML document 620 that includes a
hierarchical set of elements, each demarcated by a beginning tag
and an ending tag. The outermost, or highest-level, element is the
envelope element, demarcated by tags 622 and 623. The next-level
element includes a reference element 626 that includes references
to all files that are part of the OVF package, a disk section 628
that contains meta information about all of the virtual disks
included in the OVF package, a networks section 630 that includes
meta information about all of the logical networks included in the
OVF package, and a collection of virtual-machine configurations 632
which further includes hardware descriptions of each virtual
machine 634. There are many additional hierarchical levels and
elements within a typical OVF descriptor. The OVF descriptor is
thus a self-describing, XML file that describes the contents of an
OVF package. The OVF manifest 606 is a list of
cryptographic-hash-function-generated digests 636 of the entire OVF
package and of the various components of the OVF package. The OVF
certificate 608 is an authentication certificate 640 that includes
a digest of the manifest and that is cryptographically signed. Disk
image files, such as disk image file 610, are digital encodings of
the contents of virtual disks, and resource files 612 are digitally
encoded content, such as operating-system images. A virtual machine
or a collection of virtual machines can thus be digitally encoded
as one or more files within an OVF package that can be transmitted,
distributed, and loaded using well-known tools for transmitting,
distributing, and loading files. A virtual appliance is a software
service that is delivered as a complete software stack installed
within one or more virtual machines and encoded within an OVF
package.
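As a rough illustration of the descriptor hierarchy, the Python
sketch below builds and walks a skeletal descriptor; the element
names are simplified and omit the XML namespaces and many sections of
the actual OVF schema.

    # Skeletal OVF-descriptor example (simplified; the real schema uses
    # XML namespaces and many more sections) and a walk of its hierarchy.
    import xml.etree.ElementTree as ET

    OVF_DESCRIPTOR = """
    <Envelope>
      <References>
        <File id="file1" href="disk-image-1.vmdk"/>
        <File id="file2" href="resource-os-image.iso"/>
      </References>
      <DiskSection>
        <Disk diskId="vmdisk1" fileRef="file1" capacity="20"/>
      </DiskSection>
      <NetworkSection>
        <Network name="vm-network"/>
      </NetworkSection>
      <VirtualSystem id="example-vm">
        <VirtualHardwareSection>
          <Item>2 virtual CPUs</Item>
          <Item>4096 MB of memory</Item>
        </VirtualHardwareSection>
      </VirtualSystem>
    </Envelope>
    """

    root = ET.fromstring(OVF_DESCRIPTOR)     # outermost envelope element
    for ref in root.iter("File"):            # files that make up the package
        print("package file:", ref.get("href"))
    for vs in root.iter("VirtualSystem"):    # per-virtual-machine configuration
        print("virtual machine:", vs.get("id"))

In a complete package, a manifest of cryptographic digests and a
signed certificate would accompany such a descriptor.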
[0033] The advent of virtual machines and virtual environments has
alleviated many of the difficulties and challenges associated with
traditional general-purpose computing. Machine and operating-system
dependencies can be significantly reduced or entirely eliminated by
packaging applications and operating systems together as virtual
machines and virtual appliances that execute within virtual
environments provided by virtualization layers running on many
different types of computer hardware. A next level of abstraction,
referred to as virtual data centers or virtual infrastructure,
provides a data-center interface to virtual data centers
computationally constructed within physical data centers. FIG. 7
illustrates virtual data centers provided as an abstraction of
underlying physical-data-center hardware components. In FIG. 7, a
physical data center 702 is shown below a virtual-interface plane
704. The physical data center consists of a virtual-data-center
management server 706 and any of various different computers, such
as PCs 708, on which a virtual-data-center management interface may
be displayed to system administrators and other users. The physical
data center additionally includes generally large numbers of server
computers, such as server computer 710, that are coupled together
by local area networks, such as local area network 712 that
directly interconnects server computers 710 and 714-720 and a
mass-storage array 722. The physical data center shown in FIG. 7
includes three local area networks 712, 724, and 726 that each
directly interconnects a bank of eight servers and a mass-storage
array. The individual server computers, such as server computer
710, each includes a virtualization layer and runs multiple virtual
machines. Different physical data centers may include many
different types of computers, networks, data-storage systems and
devices connected according to many different types of connection
topologies. The virtual-data-center abstraction layer 704, a
logical abstraction layer shown by a plane in FIG. 7, abstracts the
physical data center to a virtual data center comprising one or
more resource pools, such as resource pools 730-732, one or more
virtual data stores, such as virtual data stores 734-736, and one
or more virtual networks. In certain implementations, the resource
pools abstract banks of physical servers directly interconnected by
a local area network.
[0034] The virtual-data-center management interface allows
provisioning and launching of virtual machines with respect to
resource pools, virtual data stores, and virtual networks, so that
virtual-data-center administrators need not be concerned with the
identities of physical-data-center components used to execute
particular virtual machines. Furthermore, the virtual-data-center
management server includes functionality to migrate running virtual
machines from one physical server to another in order to optimally
or near optimally manage resource allocation and provide fault
tolerance and high availability by migrating virtual machines to
most effectively utilize underlying physical hardware resources, to
replace virtual machines disabled by physical hardware problems and
failures, and to ensure that multiple virtual machines supporting a
high-availability virtual appliance are executing on multiple
physical computer systems so that the services provided by the
virtual appliance are continuously accessible, even when one of the
multiple virtual appliances becomes compute bound, data-access
bound, suspends execution, or fails. Thus, the virtual data center
layer of abstraction provides a virtual-data-center abstraction of
physical data centers to simplify provisioning, launching, and
maintenance of virtual machines and virtual appliances as well as
to provide high-level, distributed functionalities that involve
pooling the resources of individual physical servers and migrating
virtual machines among physical servers to achieve load balancing,
fault tolerance, and high availability.
[0035] FIG. 8 illustrates virtual-machine components of a
virtual-data-center management server and physical servers of a
physical data center above which a virtual-data-center interface is
provided by the virtual-data-center management server. The
virtual-data-center management server 802 and a virtual-data-center
database 804 comprise the physical components of the management
component of the virtual data center. The virtual-data-center
management server 802 includes a hardware layer 806 and
virtualization layer 808, and runs a virtual-data-center
management-server virtual machine 810 above the virtualization
layer. Although shown as a single server in FIG. 8, the
virtual-data-center management server ("VDC management server") may
include two or more physical server computers that support multiple
VDC-management-server virtual appliances. The virtual machine 810
includes a management-interface component 812, distributed services
814, core services 816, and a host-management interface 818. The
management interface is accessed from any of various computers,
such as the PC 708 shown in FIG. 7. The management interface allows
the virtual-data-center administrator to configure a virtual data
center, provision virtual machines, collect statistics and view log
files for the virtual data center, and to carry out other, similar
management tasks. The host-management interface 818 interfaces to
virtual-data-center agents 824, 825, and 826 that execute as
virtual machines within each of the physical servers of the
physical data center that is abstracted to a virtual data center by
the VDC management server.
[0036] The distributed services 814 include a distributed-resource
scheduler that assigns virtual machines to execute within
particular physical servers and that migrates virtual machines in
order to most effectively make use of computational bandwidths,
data-storage capacities, and network capacities of the physical
data center. The distributed services further include a
high-availability service that replicates and migrates virtual
machines in order to ensure that virtual machines continue to
execute despite problems and failures experienced by physical
hardware components. The distributed services also include a
live-virtual-machine migration service that temporarily halts
execution of a virtual machine, encapsulates the virtual machine in
an OVF package, transmits the OVF package to a different physical
server, and restarts the virtual machine on the different physical
server from a virtual-machine state recorded when execution of the
virtual machine was halted. The distributed services also include a
distributed backup service that provides centralized
virtual-machine backup and restore.
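The live-migration sequence just described (halt, encapsulate,
transmit, restart from recorded state) can be sketched
schematically. The Python model below uses invented names and trivial
stand-ins for the actual services, which additionally involve memory
pre-copying, secure network transfer, and failure handling:

    # Schematic model of live virtual-machine migration (illustrative
    # names and data structures; not VMware's implementation).
    from dataclasses import dataclass, field

    @dataclass
    class VirtualMachine:
        name: str
        state: str = "running"

    @dataclass
    class Host:
        name: str
        vms: list = field(default_factory=list)

        def halt(self, vm):
            vm.state = "halted"            # execution stops; state is recorded

        def encapsulate(self, vm):
            self.vms.remove(vm)
            return {"vm": vm, "recorded_state": vm.state}  # stand-in OVF package

        def restart(self, package):
            vm = package["vm"]
            vm.state = "running"           # resume from the recorded state
            self.vms.append(vm)

    def live_migrate(vm, source, destination):
        source.halt(vm)
        package = source.encapsulate(vm)   # an OVF package in the real service
        destination.restart(package)       # transmitted over the network first

    a, b = Host("server-a"), Host("server-b")
    vm = VirtualMachine("web-01")
    a.vms.append(vm)
    live_migrate(vm, a, b)
    print(vm.name, "now on", b.name, "state:", vm.state)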
[0037] The core services provided by the VDC management server
include host configuration, virtual-machine configuration,
virtual-machine provisioning, generation of virtual-data-center
alarms and events, ongoing event logging and statistics collection,
a task scheduler, and a resource-management module. Each physical
server 820-822 also includes a host-agent virtual machine 828-830
through which the virtualization layer can be accessed via a
virtual-infrastructure application programming interface ("API").
This interface allows a remote administrator or user to manage an
individual server through the infrastructure API. The
virtual-data-center agents 824-826 access virtualization-layer
server information through the host agents. The virtual-data-center
agents are primarily responsible for offloading certain of the
virtual-data-center management-server functions specific to a
particular physical server to that physical server. The
virtual-data-center agents relay and enforce resource allocations
made by the VDC management server, relay virtual-machine
provisioning and configuration-change commands to host agents,
monitor and collect performance statistics, alarms, and events
communicated to the virtual-data-center agents by the local host
agents through the interface API, and carry out other, similar
virtual-data-management tasks.
[0038] The virtual-data-center abstraction provides a convenient
and efficient level of abstraction for exposing the computational
resources of a cloud-computing facility to
cloud-computing-infrastructure users. A cloud-director management
server exposes virtual resources of a cloud-computing facility to
cloud-computing-infrastructure users. In addition, the cloud
director introduces a multi-tenancy layer of abstraction, which
partitions VDCs into tenant-associated VDCs that can each be
allocated to a particular individual tenant or tenant organization,
both referred to as a "tenant." A given tenant can be provided one
or more tenant-associated VDCs by a cloud director managing the
multi-tenancy layer of abstraction within a cloud-computing
facility. The cloud services interface (308 in FIG. 3) exposes a
virtual-data-center management interface that abstracts the
physical data center.
[0039] FIG. 9 illustrates a cloud-director level of abstraction. In
FIG. 9, three different physical data centers 902-904 are shown
below planes representing the cloud-director layer of abstraction
906-908. Above the planes representing the cloud-director level of
abstraction, multi-tenant virtual data centers 910-912 are shown.
The resources of these multi-tenant virtual data centers are
securely partitioned in order to provide secure virtual data
centers to multiple tenants, or cloud-services-accessing
organizations. For example, a cloud-services-provider virtual data
center 910 is partitioned into four different tenant-associated
virtual-data centers within a multi-tenant virtual data center for
four different tenants 916-919. Each multi-tenant virtual data
center is managed by a cloud director comprising one or more
cloud-director servers 920-922 and associated cloud-director
databases 924-926. The cloud-director server or servers run a
cloud-director virtual appliance 930 that includes a cloud-director
management interface 932, a set of cloud-director services 934, and
a virtual-data-center management-server interface 936. The
cloud-director services include an interface and tools for
provisioning virtual data centers within the multi-tenant virtual data center
on behalf of tenants, tools and interfaces for configuring and
managing tenant organizations, tools and services for organization
of virtual data centers and tenant-associated virtual data centers
within the multi-tenant virtual data center, services associated
with template and media catalogs, and provisioning of
virtualization networks from a network pool. Templates are virtual
machines that each contain an OS and/or one or more virtual
machines containing applications. A vApp template may include much
of the detailed contents of virtual machines and virtual appliances
that are encoded within OVF packages, so that the task of
configuring a virtual machine or virtual appliance is significantly
simplified, requiring only deployment of one OVF package. These
templates are stored in catalogs within a tenant's virtual data
center. Catalogs are used for developing and staging new virtual
appliances, and published catalogs are used for sharing templates
and virtual appliances across organizations. Catalogs may
include OS images and other information relevant to construction,
distribution, and provisioning of virtual appliances.
[0040] Considering FIGS. 7 and 9, the VDC-server and cloud-director
layers of abstraction can be seen, as discussed above, to
facilitate employment of the virtual-data-center concept within
private and public clouds. However, this level of abstraction does
not fully facilitate aggregation of single-tenant and multi-tenant
virtual data centers into heterogeneous or homogeneous aggregations
of cloud-computing facilities. The present application is directed
to providing an additional layer of abstraction to facilitate
aggregation of cloud-computing facilities.
[0041] FIG. 10 illustrates virtual-cloud-connector nodes ("VCC
nodes") and a VCC server, components of a distributed system that
provides multi-cloud aggregation and that includes a
cloud-connector server and cloud-connector nodes that cooperate to
provide services that are distributed across multiple clouds. In
FIG. 10, seven different cloud-computing facilities are illustrated
1002-1008. Cloud-computing facility 1002 is a private multi-tenant
cloud with a cloud director 1010 that interfaces to a VDC
management server 1012 to provide a multi-tenant private cloud
comprising multiple tenant-associated virtual data centers. The
remaining cloud-computing facilities 1003-1008 may be either public
or private cloud-computing facilities and may be single-tenant
virtual data centers, such as virtual data centers 1003 and 1006,
multi-tenant virtual data centers, such as multi-tenant virtual
data centers 1004 and 1007-1008, or any of various different kinds
of third-party cloud-services facilities, such as third-party
cloud-services facility 1005. An additional component, the VCC
server 1014, acting as a controller, is included in the private
cloud-computing facility 1002 and interfaces to a VCC node 1016
that runs as a virtual appliance within the cloud director 1010. A
VCC server may also run as a virtual appliance within a VDC
management server that manages a single-tenant private cloud. The
VCC server 1014 additionally interfaces, through the Internet, to
VCC node virtual appliances executing within remote VDC management
servers, remote cloud directors, or within the third-party cloud
services 1018-1023. The VCC server provides a VCC server interface
that can be displayed on a local or remote terminal, PC, or other
computer system 1026 to allow a cloud-aggregation administrator or
other user to access VCC-server-provided aggregate-cloud
distributed services. In general, the cloud-computing facilities
that together form a multiple-cloud-computing aggregation through
distributed services provided by the VCC server and VCC nodes are
geographically and operationally distinct.
[0042] FIG. 11 illustrates the VCC server and VCC nodes in a
slightly different fashion than the VCC server and VCC nodes are
illustrated in FIG. 10. In FIG. 11, the VCC server virtual machine
1102 is shown executing within a VCC server 1104, which comprises
one or more physical servers located within a private
cloud-computing facility.
The VCC-server virtual machine includes a VCC-server interface 1106
through which a terminal, PC, or other computing device 1108
interfaces to the VCC server. The VCC server, upon request,
displays a VCC-server user interface on the computing device 1108
to allow a cloud-aggregation administrator or other user to access
VCC-server-provided functionality. The VCC-server virtual machine
additionally includes a VCC-node interface 1108 through which the
VCC server interfaces to VCC-node virtual appliances that execute
within VDC management servers, cloud directors, and third-party
cloud-computing facilities. As shown in FIG. 11, in one
implementation, a VCC-node virtual machine is associated with each
organization configured within and supported by a cloud director.
Thus, VCC nodes 1112-1114 execute as virtual appliances within
cloud director 1116 in association with organizations 1118-1120,
respectively. FIG. 11 shows a VCC-node virtual machine 1122
executing within a third-party cloud-computing facility and a
VCC-node virtual machine 1124 executing within a VDC management
server. The VCC server, including the services provided by the
VCC-server virtual machine 1102, in conjunction with the VCC-node
virtual machines running within remote VDC management servers,
cloud directors, and within third-party cloud-computing facilities,
together provide functionality distributed among the
cloud-computing-facility components of either heterogeneous or
homogeneous cloud-computing aggregates.
[0043] FIG. 12 illustrates one implementation of a VCC node. The
VCC node 1200 is a web service that executes within an
Apache/Tomcat container that runs as a virtual appliance within a
cloud director, VDC management server, or third-party
cloud-computing server. The VCC node exposes web services 1202 to a
remote VCC server through representational state transfer ("REST")
APIs 1204 that are accessed via a hypertext transfer protocol
("HTTP") proxy server 1206. The REST protocol uses HTTP requests to
post data and requests for services, to read data and receive
service-generated responses, and to delete data.
The web services 1202 comprise a set of internal functions that are
called to execute the REST APIs 1204. Authorization services are
provided by a Spring security layer 1208. The internal functions
that implement the web services exposed by the REST APIs employ a
metadata/object-store layer implemented using an SQL Server database
1210-1212 and a storage layer 1214 whose adapters 1216-1219 provide
access to data stores 1220, file systems 1222, the
virtual-data-center management-server management interface 1224, and
the cloud-director management interface 1226. These adapters may
additionally include adapters to third-party cloud-management
services, interfaces, and systems. The internal functions that
implement the web
implement the web services may also access a message protocol 1230
and network transfer services 1232 that allow for transfer of OVF
packages and other files securely between VCC nodes via virtual
networks 1234 that virtualize underlying physical networks 1236.
The message protocol 1230 and network transfer services 1232
together provide for secure data transfer, multipart messaging, and
checkpoint-restart data transfer that allows failed data transfers
to be restarted from most recent checkpoints, rather than having to
be entirely retransmitted.
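The following minimal Java sketch illustrates the kind of REST-style
request handling described above, in which HTTP GET, POST, and
DELETE requests read, post, and delete data. It is an illustration
only; the class name, port, endpoint path, and in-memory store are
hypothetical assumptions rather than elements of the described
VCC-node implementation.

    // Hypothetical sketch of a REST-style web-service endpoint of the
    // kind a VCC node might expose; names are illustrative only.
    import com.sun.net.httpserver.HttpExchange;
    import com.sun.net.httpserver.HttpServer;
    import java.io.IOException;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;
    import java.nio.charset.StandardCharsets;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class RestNodeSketch {
        // In-memory stand-in for the metadata/object store.
        private static final Map<String, String> store =
                new ConcurrentHashMap<>();

        public static void main(String[] args) throws IOException {
            HttpServer server =
                    HttpServer.create(new InetSocketAddress(8443), 0);
            server.createContext("/api/objects", RestNodeSketch::handle);
            server.start();
        }

        private static void handle(HttpExchange ex) throws IOException {
            String key = ex.getRequestURI().getPath();
            String response;
            switch (ex.getRequestMethod()) {
                case "GET":    // read data, receive a generated response
                    response = store.getOrDefault(key, "");
                    break;
                case "POST":   // post data or a request for services
                    store.put(key, new String(
                            ex.getRequestBody().readAllBytes(),
                            StandardCharsets.UTF_8));
                    response = "stored";
                    break;
                case "DELETE": // delete data
                    store.remove(key);
                    response = "deleted";
                    break;
                default:
                    ex.sendResponseHeaders(405, -1);
                    return;
            }
            byte[] body = response.getBytes(StandardCharsets.UTF_8);
            ex.sendResponseHeaders(200, body.length);
            try (OutputStream os = ex.getResponseBody()) {
                os.write(body);
            }
        }
    }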
[0044] The VCC node, packaged inside an OVF container, is available
to the cloud-director servers and VDC management servers for
deployment as a virtual-appliance. The VCC node is deployed as a
virtual appliance, containing one virtual machine in this case, and
is launched within the cloud-director servers and VDC management
servers in the same fashion as any other virtual machine is
installed and launched in those servers. When the VCC node starts
up, a cloud administrator accesses an administrative interface
offered as one of the VCC node's web services. This administrative
interface, along with a similar administrative interface provided
as a web service by the VCC server running within a VDC management
server/cloud-director, allows an administrator of the cloud or
organization in which the VCC node is being installed to configure
and register the VCC node.
[0045] FIG. 13 provides a control-flow diagram that illustrates
configuration and registration of a VCC node through the
administrative interface provided as a web service by the VCC node.
In step 1302, an administrator of the cloud organization or cloud
into which the VCC node has been installed logs into the VCC node
through the VCC-node's administrative interface. Next, in step
1304, the administrator determines whether or not the VCC node has
been installed in a public cloud or a private cloud with secure
HTTP connections to external entities. When installed in a public
cloud with secure HTTP, the administrator configures the VCC node
as a public VCC node in step 1306, using a configuration tool or
input to the administrative interface. Otherwise, the administrator
configures the VCC node as a private VCC node, in step 1308. This
designation enables the VCC-server controller to direct requests to
the appropriate VCC node based on whether that node is private or
public. A
private VCC node is able to access all public VCC nodes and their
associated cloud services, whereas a public VCC node may not be
able to access a private VCC-node-backed cloud service in all
cases, as the private VCC node may lie behind a corporate firewall.
Next, in step 1310, the administrator inputs, through the
administrative interface, the IP address of the VCC server that
will connect to and manage the VCC node and inputs an
identification of the HTTP proxy server and port through which the
VCC node receives VCC-node application-program-interface ("API")
calls from the managing VCC server and from other VCC nodes. Next,
in step 1312, the administrator accesses the administrative
interface of the remote VCC server that will manage the VCC node in
order to register the VCC node with the remote VCC server. In step
1314, the administrator enters the IP address of the proxy server
through which the VCC node receives API calls and the HTTP port
through which the VCC node receives API calls. When the VCC node is
public, as determined again by the administrator in step 1316, the
administrator sets a public attribute to characterize the VCC node
to the VCC server through the VCC-server administrative interface
in step 1318. Otherwise, in step 1320, the administrator sets a
private attribute for the VCC node. Finally, in step 1322, the
administrator enters various additional information into the
VCC-server administrative interface to complete registration of the
VCC node. This information may include the URL for the organization
or cloud in which the VCC node is being installed and an indication
of the cloud type, such as, for example, whether the cloud is a
virtual data center managed by a VDC management server or an
organization virtual data center managed by a cloud director. The
administrator additionally enters, through the VCC-server
administrative interface, various attributes that control the
process by which the VCC server establishes connections with the
VCC node, including whether or not the VCC server should use a
proxy to connect to the VCC node and whether or not a
secure-socket-layer ("SSL") certificate should be employed in
establishing and exchanging information through the connection.
Additional information entered by the administrator through the
VCC-server administrative interface may include the name and
password that the VCC server should use to log into the VCC node as
well as an indication of the type of services that the VCC node is
capable of performing on behalf of the VCC server. In many
implementations, a variety of different types of VCC nodes may be
installed into clouds, each type providing different services and
other capabilities to the VCC servers that manage them as well as
to other VCC nodes that request services from the VCC node.
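As an illustration of the registration information discussed above,
the following Java sketch collects, into a single record, the
attributes an administrator might enter through the VCC-server
administrative interface. The class and field names are hypothetical
assumptions drawn from the attributes described above, not an actual
data model of the implementation.

    // Hypothetical registration record a VCC server might keep for
    // each registered VCC node; all names are illustrative.
    public class VccNodeRegistration {
        enum CloudType { VDC_MANAGEMENT_SERVER, CLOUD_DIRECTOR, THIRD_PARTY }

        String proxyAddress;       // IP address of the HTTP proxy (step 1314)
        int    proxyPort;          // HTTP port for incoming API calls
        boolean isPublic;          // public/private attribute (steps 1318/1320)
        String organizationUrl;    // URL of the hosting organization or cloud
        CloudType cloudType;       // kind of cloud the node is installed in
        boolean connectViaProxy;   // whether the VCC server uses a proxy
        boolean useSslCertificate; // whether SSL secures the connection
        String loginName;          // credentials the VCC server logs in with
        String loginPassword;
        java.util.Set<String> supportedServices =
                new java.util.HashSet<>(); // services the node can perform
    }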
[0046] Once a VCC node has been installed, launched, configured,
and registered within an organization cloud or a cloud managed by a
VDC management server, the VCC node essentially waits to receive
requests for login and for services through the VCC-node API-call
interface and, upon receiving such requests, fulfills them. Those
requests that involve multiple VCC nodes are fulfilled by
coordinating the requests with the other VCC nodes. The VCC nodes
act as a delegated, distributed set of remote servers within remote
clouds on behalf of the VCC server controller that manages the VCC
nodes.
[0047] FIGS. 14-16 provide control-flow diagrams that illustrate a
general service-request-processing operation of a VCC node. FIG. 14
provides a control-flow diagram for overall request processing by a
VCC node. In FIG. 14, as in subsequent FIGS. 15-19, actions
performed by two different entities are shown, with the actions
performed by the first entity shown in the left-hand portion 1402 of
the figure and actions performed by the second of the two entities
shown in the right-hand portion 1404 of FIG. 14. In the case of
FIG. 14, the first entity is a VCC server and the second entity is
a VCC node. In step 1406, the VCC server, using the HTTP POST
command, transmits a login request to the VCC node, supplying, in
the request, the name and password for the VCC node and any other
information that needs to be passed to the VCC node in order to
request login. In step 1408, the VCC node receives the login request
and
processes the included information to determine whether or not the
login request should be carried out. When the login request is
determined to be valid, in step 1410, the VCC node returns an
indication of success, in step 1412. Otherwise, the VCC node
returns an indication of failure in step 1414. Receiving the
response from the VCC node, the VCC server determines whether or
not the response indicates a successful login, in step 1416. When
not, the VCC server may either retry the login request or undertake
other actions to handle the login-request failure. These additional
actions are not shown in FIG. 14, and are instead indicated by the
arrow 1418 emanating from step 1416. When the login request has
been successful, as determined in step 1416, the VCC server, in step
1420, issues an HTTP GET request to request a particular service
from the VCC node; when specified by the API interface, the GET
request includes, as parameters, information needed by the VCC node
to service the request. In step 1422, the VCC node
receives the request for service, processes the request in step
1424, and returns a result in step 1426. The VCC server receives
the result, in step 1428, and continues with whatever VCC-server
tasks were underway at the time the service was requested. In
certain implementations, when all service requests that need to be
issued by the VCC server have been issued and responses have been
received for the requests from the VCC node, the VCC server may
explicitly log out from the VCC node. There may be additional
VCC-server and VCC-node interactions involved when servicing of a
request fails.
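The following Java sketch, using the standard java.net.http client,
illustrates the VCC-server side of the exchange shown in FIG. 14: a
POST-based login followed by a GET-based service request. The host
name, endpoint paths, credential format, and query parameters are
assumptions made for illustration, not the actual VCC-node API.

    // Sketch of the VCC-server side of the FIG. 14 exchange; the
    // endpoints and parameters are hypothetical.
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class VccServerClientSketch {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            String node = "https://vcc-node.example.com:8443";

            // Step 1406: POST a login request carrying name and password.
            HttpRequest login = HttpRequest.newBuilder()
                    .uri(URI.create(node + "/api/login"))
                    .header("Content-Type",
                            "application/x-www-form-urlencoded")
                    .POST(HttpRequest.BodyPublishers.ofString(
                            "name=vccserver&password=secret"))
                    .build();
            HttpResponse<String> loginResp =
                    client.send(login, HttpResponse.BodyHandlers.ofString());
            if (loginResp.statusCode() != 200) {
                // Step 1416, failure branch: retry or otherwise handle.
                throw new IllegalStateException(
                        "login failed: " + loginResp.statusCode());
            }

            // Step 1420: GET a particular service, passing parameters
            // when the API interface calls for them.
            HttpRequest service = HttpRequest.newBuilder()
                    .uri(URI.create(node + "/api/services/status?detail=full"))
                    .GET()
                    .build();
            HttpResponse<String> result =
                    client.send(service, HttpResponse.BodyHandlers.ofString());
            System.out.println("service result: " + result.body()); // 1428
        }
    }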
[0048] FIG. 15 provides a control-flow diagram for the process
service request in step 1424 of FIG. 14 for a general service
request. In step 1502, the VCC node evaluates the parameters and
other information included with the service request and determines,
in step 1504, whether or not the request can be serviced. When the
request cannot be serviced, the VCC node returns a failure
indication in step 1506. Otherwise, in step 1508, the VCC node
invokes the web service that corresponds to the request in order to
process the request. When the web service returns a response, in
step 1510, the VCC node determines, in step 1512, whether or not the
request has been successfully processed. When the request has not
been successfully processed, control flows to step 1506 in which an
indication of failure is returned to the VCC server. Otherwise, an
indication of success is returned to the VCC server, in step 1514,
along with additional information produced for the VCC server as
part of processing of the request.
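A minimal Java sketch of the request-processing logic of FIG. 15
follows. The service names and the use of a simple function table to
stand in for the internal web-service functions are illustrative
assumptions, not details of the described implementation.

    // Sketch of FIG. 15: validate the request, dispatch to the
    // matching internal function, and map the outcome to an
    // indication of success or failure. Names are hypothetical.
    import java.util.Map;
    import java.util.function.Function;

    public class ServiceDispatchSketch {
        record Result(boolean success, String payload) {}

        // Stand-ins for the internal functions behind the web services.
        private final Map<String, Function<Map<String, String>, Result>>
                services = Map.of(
                        "status", params -> new Result(true, "ok"),
                        "inventory", params -> new Result(true, "[]"));

        Result processServiceRequest(String serviceName,
                                     Map<String, String> params) {
            // Steps 1502/1504: evaluate parameters and decide whether
            // the request can be serviced at all.
            Function<Map<String, String>, Result> svc =
                    services.get(serviceName);
            if (svc == null || params == null) {
                return new Result(false, "cannot service request"); // 1506
            }
            // Step 1508: invoke the corresponding web service.
            Result r = svc.apply(params);
            // Steps 1510-1514: return failure, or success plus any data
            // produced while processing the request.
            return r.success() ? r : new Result(false, "processing failed");
        }
    }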
[0049] FIG. 16 illustrates an alternative version of the
process-service-request step 1424 in FIG. 14 when the request
involves transfer of information from the VCC server to the VCC
node. Steps 1602-1614 in FIG. 16 are identical to steps 1502-1514
in FIG. 15, and are not further described. However, when the request
by the VCC server involves transfer of information from the VCC
server to the VCC node, then, upon receiving the response from the
VCC node in step 1428, and when that response indicates success, as
determined in step 1616, the VCC server issues an HTTP POST command
to the VCC node, in step 1618, to
transfer the information to the VCC node, which receives this
information in step 1620. Additional steps, represented by arrow
1622, may be carried out by the VCC node upon receiving the data,
including returning an indication of successful reception.
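The following short Java sketch illustrates this variant, in which
the VCC server issues a follow-up HTTP POST carrying the data only
after the initial service request has succeeded. The endpoint path
is an assumption made for illustration.

    // Sketch of the FIG. 16 variant: POST the data to the VCC node
    // only when the preceding service request succeeded.
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class PostOnSuccessSketch {
        static void transferIfAccepted(HttpClient client, String node,
                                       HttpResponse<String> serviceResponse,
                                       byte[] data) throws Exception {
            if (serviceResponse.statusCode() != 200) return; // step 1616
            HttpRequest post = HttpRequest.newBuilder()      // step 1618
                    .uri(URI.create(node + "/api/transfer"))
                    .POST(HttpRequest.BodyPublishers.ofByteArray(data))
                    .build();
            // The VCC node receives the information in step 1620.
            client.send(post, HttpResponse.BodyHandlers.discarding());
        }
    }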
[0050] FIGS. 17-19 illustrate a more complex file-transfer service
request that may be issued by a VCC server to a VCC node, which, in
turn, interacts with a second VCC node to carry out the requested
file transfer. In step 1702, the VCC server receives a request from
a VDC management server or cloud director ("CD") to transfer a file
between two different clouds. In step 1704, the VCC server accesses
a database associated with the VCC server to determine the
characteristics of the clouds between which the file is to be
transferred as well as the characteristics of the VCC nodes managed
by the VCC server within those clouds. When the target cloud is
public, as determined in step 1706, the VCC server posts a login
request to the source VCC node for the file transfer in step 1708.
Otherwise, the VCC server posts a login request to the target VCC
node for the file transfer in step 1710. The VCC node that
receives either of these POST requests carries out
already-described login-request processing in steps 1712-1715. When
the login request is successful, as determined in step 1716 by the
VCC server, and when the target of the file transfer is a public
cloud in which a public VCC node is executing, as determined in
step 1718, the VCC server posts an upload-transfer request to the
source VCC node for the file transfer in step 1720. Otherwise, the
VCC server posts a download-transfer request to the target VCC node
for the file transfer in step 1722.
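The decision logic of FIG. 17 might be sketched, purely for
illustration, as follows; the VccNode interface and its method names
are hypothetical stand-ins for the POST-based interactions described
above.

    // Sketch of FIG. 17: log in to the source node when the target
    // cloud is public, otherwise to the target node, then issue an
    // upload or download transfer request accordingly.
    public class TransferDirectionSketch {
        interface VccNode {
            boolean login(String name, String password);
            void postUploadTransfer(String file, VccNode target);
            void postDownloadTransfer(String file, VccNode source);
        }

        static void startTransfer(VccNode source, VccNode target,
                                  boolean targetIsPublic, String file) {
            // Steps 1706-1710: choose which node to log in to.
            VccNode entry = targetIsPublic ? source : target;
            if (!entry.login("vccserver", "secret")) return; // step 1716

            if (targetIsPublic) {
                // Step 1720: a public target can accept an upload pushed
                // by the source node.
                source.postUploadTransfer(file, target);
            } else {
                // Step 1722: a private target, likely behind a firewall,
                // pulls the file from the source instead.
                target.postDownloadTransfer(file, source);
            }
        }
    }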
[0051] FIG. 18 illustrates the remaining portion of the
file-transfer operation invoked by the VCC server in step 1720 in
FIG. 17. The source VCC node, in step 1802, receives the upload
transfer request and, in step 1804, using information provided by
the VCC server to the source VCC node, requests login to the target
VCC node. In steps 1806-1809, the target VCC node processes the
login request as previously described. When, in step 1810, the
source VCC node receives a response to the login request from the
target VCC node, and the response indicates a successful login, the
source VCC node issues a POST command, in step 1812, to transfer
the file to the target VCC node. In step 1814, the target VCC node
receives the file and directs the file to an appropriate
destination within the cloud in which the target VCC node executes.
The destination may be specified by the VCC server in the initial
file-transfer request or may be determined by the target VCC node
using information included in the file-transfer request either by
the VCC server or by the source VCC node, depending on the
particular file-transfer command and on the particular
implementation. When the file transfer is carried out successfully,
as determined in step 1816, the target VCC node returns an
indication of success to the source VCC node in step 1818 and an
indication of success to the VCC server in step 1820. Otherwise,
when the file has not been successfully transferred, the target VCC
node may return an indication of error to the source VCC node and
initiate a checkpoint restart of the file transfer in step 1822.
Checkpoint restarts are made possible by logging checkpoints within
the file transfer on both the source VCC node and target VCC node
to allow the two VCC nodes to cooperate, when an error occurs
partway through the file transfer, to roll back the file-transfer
process to the most recent checkpoint, on both VCC nodes, and resume
the file transfer from that point.
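A minimal sketch of such checkpointed transfer logic follows,
assuming a fixed chunk size and a caller-supplied callback that
records each completed offset as the most recent checkpoint; neither
detail is specified by the described implementation.

    // Sketch of checkpoint-restart transfer: both ends log the byte
    // offset of each completed chunk so a failed transfer can resume
    // from the most recent checkpoint rather than from the beginning.
    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.util.function.LongConsumer;

    public class CheckpointTransferSketch {
        static final int CHUNK = 1 << 20; // 1 MiB per checkpointed chunk

        // Transfers from 'checkpoint' onward, invoking 'logCheckpoint'
        // after each chunk; on failure, the caller reopens the streams
        // and retries from the last logged checkpoint.
        static void transferFrom(InputStream in, OutputStream out,
                                 long checkpoint,
                                 LongConsumer logCheckpoint)
                throws IOException {
            in.skipNBytes(checkpoint);  // roll forward to the checkpoint
            long offset = checkpoint;
            byte[] buf = new byte[CHUNK];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
                out.flush();
                offset += n;
                logCheckpoint.accept(offset); // both nodes record this
            }
        }
    }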
[0052] FIG. 19 provides a control-flow diagram for completion of
the file-transfer operation initiated in step 1722 of FIG. 17.
Steps 1902 through 1910 are equivalent to steps 1802 through 1810,
in FIG. 18, and are therefore not further described. In step 1912,
the target VCC node issues an HTTPS GET request for the file to the
source VCC node. The source VCC node receives the GET request in
step 1914 and initiates file transfer to the target VCC node in
step 1916. The file is transferred to a destination that is either
specified by the VCC server in the initial file-transfer request or,
in different implementations and for different types of file
transfers, determined either by the target VCC node using
VCC-server-provided information or by the source VCC node. In
step 1918, the target VCC node receives the file and, when the file
has been successfully transferred, as determined in step 1920,
returns success to the VCC server in step 1922. Otherwise, as in
step 1822 of FIG. 18, the target VCC node may, in step 1924,
initiate a checkpoint restart of the file transfer by returning an
error indication to the source VCC node. The VCC
server initiates an upload transfer, in step 1720 of FIG. 17, in
the case that the target VCC node is in a public cloud and is
characterized as a public VCC node, because a public VCC node is
not screened off by a firewall from the source VCC node. By
contrast, the VCC server issues a download-transfer request to the
target VCC node, in the case that the target VCC node resides in a
private cloud, because, in that case, the target VCC node is likely
to be prevented from receiving unsolicited data transfers by a
firewall or other security mechanisms.
[0053] Although the present invention has been described in terms
of particular embodiments, it is not intended that the invention be
limited to these embodiments. Modifications within the spirit of
the invention will be apparent to those skilled in the art. For
example, VCC-server and VCC-node functionality may be implemented
in virtual appliances using many different programming languages,
modular organizations, control structures, data structures, and by
varying other such implementation parameters. VCC nodes may be
implemented to adapt to, and interface to, a variety of different
types of other virtual appliances and functionalities within the
cloud-computing facility in which the VCC node resides. The ability
of the VCC server to access web services in remote cloud-computing
facilities through VCC nodes provides the ability for the VCC
server to access any number of different types of functionalities
through various different API-call interfaces provided by a variety
of different types of web services. Although the current
application has mentioned a number of specific examples, many
additional examples can be implemented and configured to extend the
functionality of the VCC server and cloud-aggregation management
applications and interfaces provided by the VCC server.
[0054] It is appreciated that the previous description of the
disclosed embodiments is provided to enable any person skilled in
the art to make or use the present disclosure. Various
modifications to these embodiments will be readily apparent to
those skilled in the art, and the generic principles defined herein
may be applied to other embodiments without departing from the
spirit or scope of the disclosure. Thus, the present disclosure is
not intended to be limited to the embodiments shown herein but is
to be accorded the widest scope consistent with the principles and
novel features disclosed herein.
* * * * *