U.S. patent application number 15/395192 was filed with the patent office on 2016-12-30 and published on 2018-01-25 as publication number 20180027058 for technologies for efficiently identifying managed nodes available for workload assignments.
The applicants listed for this patent are Nishi Ahuja, Susanne M. Balle, Mrittika Ganguli, and Rahul Khanna. Invention is credited to Nishi Ahuja, Susanne M. Balle, Mrittika Ganguli, and Rahul Khanna.
Publication Number | 20180027058
Application Number | 15/395192
Family ID          | 60804962
Filed Date         | 2016-12-30
Publication Date   | 2018-01-25
United States Patent Application    20180027058
Kind Code                           A1
Balle; Susanne M.; et al.           January 25, 2018

Technologies for Efficiently Identifying Managed Nodes Available for Workload Assignments
Abstract
Technologies for identifying managed nodes available for
workload assignments include an orchestrator server to assign
workloads to the managed nodes and receive availability data from
the managed nodes, indicative of a determination by each of the
managed nodes as to an availability of the managed node to receive
an additional workload. The orchestrator server is also to receive
telemetry data from the managed nodes, indicative of resource
utilization by each of the managed nodes as the workloads are
performed. The orchestrator server is also to determine, as a
function of the availability data, a reduced set of available
managed nodes for analysis, determine, as a function of the
telemetry data, adjustments to the workload assignments to increase
the resource utilization among the reduced set of managed nodes,
and apply the determined adjustments to the reduced set of managed
nodes as the workloads are performed.
Inventors: Balle; Susanne M.; (Hudson, NH); Khanna; Rahul; (Portland, OR); Ahuja; Nishi; (University Place, WA); Ganguli; Mrittika; (Bangalore, IN)

Applicant:
Name                 City               State   Country
Balle; Susanne M.    Hudson             NH      US
Khanna; Rahul        Portland           OR      US
Ahuja; Nishi         University Place   WA      US
Ganguli; Mrittika    Bangalore                  IN
Family ID:  60804962
Appl. No.:  15/395192
Filed:      December 30, 2016
Related U.S. Patent Documents

Application Number   Filing Date    Patent Number
62365969             Jul 22, 2016
62376859             Aug 18, 2016
62427268             Nov 29, 2016
Current U.S. Class: 709/226

Current CPC Class:
G06F 3/0619 20130101;
G08C 2200/00 20130101; G11C 7/1072 20130101; H04L 49/35 20130101;
H04Q 2011/0052 20130101; H05K 2201/10121 20130101; G02B 6/4292
20130101; G06F 9/5077 20130101; G06F 2212/1024 20130101; G06F
2212/401 20130101; G06F 2212/7207 20130101; H04B 10/25891 20200501;
H04L 67/1014 20130101; G02B 6/3882 20130101; G08C 17/02 20130101;
H03M 7/4031 20130101; G05D 23/2039 20130101; G06F 3/0653 20130101;
G06F 13/1694 20130101; G11C 5/02 20130101; H03M 7/6023 20130101;
H04L 69/04 20130101; Y10S 901/01 20130101; G06F 13/42 20130101;
G06F 3/0625 20130101; G06F 3/0688 20130101; G06F 9/30036 20130101;
H04L 9/3247 20130101; H04L 41/5019 20130101; H04Q 2011/0037
20130101; H05K 7/1487 20130101; H05K 7/20709 20130101; H04Q 1/04
20130101; B25J 15/0014 20130101; G06F 2209/483 20130101; H04L 67/12
20130101; H04Q 11/0003 20130101; H04Q 11/0071 20130101; H04Q
2213/13523 20130101; H05K 5/0204 20130101; H05K 7/20736 20130101;
G06F 3/0616 20130101; G06F 3/0683 20130101; G06F 12/1408 20130101;
G11C 11/56 20130101; H04L 9/14 20130101; H04L 43/0876 20130101;
H04L 45/52 20130101; H04L 49/555 20130101; H05K 7/1485 20130101;
G02B 6/3893 20130101; G06F 8/65 20130101; G06F 9/505 20130101; G06F
2209/5022 20130101; H04L 67/02 20130101; G06F 3/0613 20130101; G06F
2212/1041 20130101; H03M 7/3084 20130101; H04L 47/782 20130101;
H04L 49/00 20130101; H04L 67/10 20130101; H04L 67/34 20130101; H05K
7/1492 20130101; H04Q 1/09 20130101; H05K 7/1489 20130101; G06F
3/067 20130101; G06F 9/4881 20130101; G06F 15/161 20130101; H04L
41/0896 20130101; H05K 7/1442 20130101; G06F 3/061 20130101; G06F
9/5044 20130101; H03M 7/4081 20130101; H04L 67/1008 20130101; G06F
9/5016 20130101; G06F 2212/1008 20130101; G06F 2212/152 20130101;
H04L 29/12009 20130101; H04L 47/805 20130101; H04L 43/0817
20130101; G06F 13/161 20130101; H04L 9/0643 20130101; H04L 43/065
20130101; H04L 47/24 20130101; H04Q 2213/13527 20130101; H05K
7/1422 20130101; G06F 3/0638 20130101; G06F 13/4282 20130101; G06Q
10/06314 20130101; H03M 7/4056 20130101; H04L 47/823 20130101; H05K
7/1447 20130101; Y02D 10/00 20180101; H05K 7/1461 20130101; G06F
2212/402 20130101; H04L 41/12 20130101; H04L 47/765 20130101; H05K
7/2039 20130101; G06F 3/064 20130101; G06F 12/10 20130101; H04L
12/2809 20130101; H04L 41/147 20130101; H04L 47/82 20130101; H04L
69/329 20130101; H04Q 2011/0086 20130101; H05K 2201/10159 20130101;
H05K 2201/10189 20130101; G06F 9/5072 20130101; G06F 13/4068
20130101; G11C 14/0009 20130101; H04L 45/02 20130101; H04L 49/357
20130101; H04L 67/1034 20130101; H04Q 11/0062 20130101; H05K 7/1421
20130101; B65G 1/0492 20130101; G06F 3/0665 20130101; G06F 12/0862
20130101; H04L 41/082 20130101; H04L 49/45 20130101; H04L 9/3263
20130101; H05K 7/20745 20130101; G05D 23/1921 20130101; G06F
13/1668 20130101; G06Q 10/087 20130101; H04B 10/25 20130101; H04Q
11/0005 20130101; H05K 7/1418 20130101; G06F 3/0631 20130101; G06F
3/0655 20130101; G06F 3/0658 20130101; G06F 9/3887 20130101; G06F
9/544 20130101; G06F 13/385 20130101; G06Q 10/06 20130101; H04L
49/25 20130101; H04L 67/1097 20130101; H04Q 2011/0073 20130101;
G06F 3/065 20130101; G06F 3/0679 20130101; G06F 12/109 20130101;
H04L 41/024 20130101; G06F 3/0611 20130101; G06F 13/409 20130101;
G06F 15/8061 20130101; H04L 49/15 20130101; H04Q 2011/0041
20130101; H04Q 2011/0079 20130101; G02B 6/4452 20130101; G11C 5/06
20130101; H03M 7/3086 20130101; H05K 1/0203 20130101; H05K 7/20727
20130101; G02B 6/3897 20130101; G06F 3/0647 20130101; G06F 9/4401
20130101; H03M 7/30 20130101; H04L 47/38 20130101; H04L 67/1012
20130101; H05K 7/1491 20130101; G06F 11/3414 20130101; G06F
2209/5019 20130101; G06F 3/0659 20130101; G06F 3/0664 20130101;
G06F 3/0673 20130101; G06F 13/4022 20130101; G06F 16/9014 20190101;
G06F 2212/202 20130101; G07C 5/008 20130101; H04L 43/08 20130101;
H04W 4/023 20130101; H05K 2201/066 20130101; H05K 1/181 20130101;
H04L 41/0813 20130101; H04Q 11/00 20130101; H04W 4/80 20180201;
H04L 67/1004 20130101; G06F 1/183 20130101; G06F 11/141 20130101;
H04L 67/16 20130101; Y02P 90/30 20151101; Y04S 10/50 20130101; G06F
1/20 20130101; G06F 3/0689 20130101; G06F 9/5027 20130101; G06F
12/0893 20130101; H04L 41/046 20130101; H04L 67/1029 20130101; G06Q
10/20 20130101; H03M 7/40 20130101; H03M 7/6005 20130101; H04L
43/0894 20130101; H05K 7/1498 20130101; H04L 41/145 20130101; H04L
43/16 20130101; H04L 67/306 20130101; H05K 7/20836 20130101; H05K
13/0486 20130101; G06Q 50/04 20130101; G06F 2212/1044 20130101

International Class: H04L 29/08 20060101 H04L029/08; H04L 12/24 20060101 H04L012/24
Claims
1. An orchestrator server to utilize availability data for a set of
managed nodes to assign workloads, the orchestrator server
comprising: one or more processors; one or more memory devices
having stored therein a plurality of instructions that, when
executed by the one or more processors, cause the orchestrator
server to: assign workloads to the managed nodes; receive
availability data from the managed nodes, wherein the availability
data is indicative of a determination by each of the managed nodes
as to an availability of the managed node to receive an additional
workload; receive telemetry data from the managed nodes, wherein
the telemetry data is indicative of resource utilization by each of
the managed nodes as the workloads are performed; determine, as a
function of the availability data, a reduced set of available
managed nodes for analysis; determine, as a function of the
telemetry data, adjustments to the workload assignments to increase
the resource utilization among the reduced set of managed nodes;
and apply the determined adjustments to the reduced set of managed
nodes as the workloads are performed.
2. The orchestrator server of claim 1, wherein to assign the
workloads comprises to assign a priority to one or more of the
workloads.
3. The orchestrator server of claim 2, wherein to assign a priority
to one or more of the workloads comprises to assign a deterministic
execution priority to one or more of the workloads.
4. The orchestrator server of claim 1, wherein to assign the
workloads comprises to generate initial availability data as a
function of the assignment of the workloads.
5. The orchestrator server of claim 1, wherein to determine, as a
function of the telemetry data, adjustments to the workload
assignments comprises to generate, as a function of the telemetry
data, data analytics as the workloads are performed.
6. The orchestrator server of claim 5, wherein to generate data
analytics comprises to limit the generation of the data analytics
to the reduced set of managed nodes.
7. The orchestrator server of claim 5, wherein to generate data
analytics comprises to identify trends in resource utilization of
the workloads performed by the managed nodes in the reduced set of
managed nodes.
8. The orchestrator server of claim 5, wherein to generate data
analytics comprises to generate profiles of the workloads performed
by the managed nodes in the reduced set of managed nodes.
9. The orchestrator server of claim 5, wherein to generate data
analytics comprises to predict future resource utilization of the
workloads performed by the managed nodes in the reduced set of
managed nodes.
10. The orchestrator server of claim 1, wherein the plurality of
instructions, when executed by the one or more processors, further
cause the orchestrator server to: obtain policy data indicative
of one or more goals to be achieved in the management of the
workloads; and modify the adjustments as a function of the policy
data.
11. The orchestrator server of claim 1, wherein to determine the
adjustments comprises to determine one or more node-specific
adjustments indicative of changes to an availability of one or more
resources of a managed node in the reduced set of managed nodes to
one or more of the workloads performed by the managed node.
12. The orchestrator server of claim 11, wherein to determine the
node-specific adjustments comprises to determine at least one of a
processor throttle adjustment, a memory usage adjustment, a network
bandwidth adjustment, or a fan speed adjustment.
13. One or more machine-readable storage media comprising a
plurality of instructions stored thereon that, in response to being
executed, cause an orchestrator server to: assign workloads to a
plurality of managed nodes; receive availability data from the
managed nodes, wherein the availability data is indicative of a
determination by each of the managed nodes as to an availability of
the managed node to receive an additional workload; receive
telemetry data from the managed nodes, wherein the telemetry data
is indicative of resource utilization by each of the managed nodes
as the workloads are performed; determine, as a function of the
availability data, a reduced set of available managed nodes for
analysis; determine, as a function of the telemetry data,
adjustments to the workload assignments to increase the resource
utilization among the reduced set of managed nodes; and apply the
determined adjustments to the reduced set of managed nodes as the
workloads are performed.
14. The one or more machine-readable storage media of claim 13,
wherein to assign the workloads comprises to assign a priority to
one or more of the workloads.
15. The one or more machine-readable storage media of claim 14,
wherein to assign a priority to one or more of the workloads
comprises to assign a deterministic execution priority to one or
more of the workloads.
16. The one or more machine-readable storage media of claim 13,
wherein to assign the workloads comprises to generate initial
availability data as a function of the assignment of the
workloads.
17. The one or more machine-readable storage media of claim 13,
wherein to determine, as a function of the telemetry data,
adjustments to the workload assignments comprises to generate, as a
function of the telemetry data, data analytics as the workloads are
performed.
18. The one or more machine-readable storage media of claim 17,
wherein to generate data analytics comprises to limit the
generation of the data analytics to the reduced set of managed
nodes.
19. The one or more machine-readable storage media of claim 17,
wherein to generate data analytics comprises to identify trends in
resource utilization of the workloads performed by the managed
nodes in the reduced set of managed nodes.
20. The one or more machine-readable storage media of claim 17,
wherein to generate data analytics comprises to generate profiles
of the workloads performed by the managed nodes in the reduced set
of managed nodes.
21. The one or more machine-readable storage media of claim 17,
wherein to generate data analytics comprises to predict future
resource utilization of the workloads performed by the managed
nodes in the reduced set of managed nodes.
22. The one or more machine-readable storage media of claim 13,
wherein the plurality of instructions, when executed, further cause
the orchestrator server to: obtain policy data indicative of
one or more goals to be achieved in the management of the
workloads; and modify the adjustments as a function of the policy
data.
23. The one or more machine-readable storage media of claim 13,
wherein to determine the adjustments comprises to determine one or
more node-specific adjustments indicative of changes to an
availability of one or more resources of a managed node in the
reduced set of managed nodes to one or more of the workloads
performed by the managed node.
24. The one or more machine-readable storage media of claim 23,
wherein to determine the node-specific adjustments comprises to
determine at least one of a processor throttle adjustment, a memory
usage adjustment, a network bandwidth adjustment, or a fan speed
adjustment.
25. An orchestrator server to manage workloads among a plurality of
managed nodes coupled to a network, the orchestrator server
comprising: circuitry for assigning workloads to the managed nodes;
circuitry for receiving availability data from the managed nodes,
wherein the availability data is indicative of a determination by
each of the managed nodes as to an availability of the managed node
to receive an additional workload; circuitry for receiving
telemetry data from the managed nodes, wherein the telemetry data
is indicative of resource utilization by each of the managed nodes
as the workloads are performed; means for determining, as a
function of the availability data, a reduced set of available
managed nodes for analysis; means for determining, as a function of
the telemetry data, adjustments to the workload assignments to
increase the resource utilization among the reduced set of managed
nodes; and means for applying the determined adjustments to the
reduced set of managed nodes as the workloads are performed.
26. A method for utilizing availability data for a set of managed
nodes to assign workloads, the method comprising: assigning, by an
orchestrator server, workloads to the managed nodes; receiving, by
the orchestrator server, availability data from the managed nodes,
wherein the availability data is indicative of a determination by
each of the managed nodes as to an availability of the managed node
to receive an additional workload; receiving, by the orchestrator
server, telemetry data from the managed nodes, wherein the
telemetry data is indicative of resource utilization by each of the
managed nodes as the workloads are performed; determining, by the
orchestrator server and as a function of the availability data, a
reduced set of available managed nodes for analysis; determining,
by the orchestrator server and as a function of the telemetry data,
adjustments to the workload assignments to increase the resource
utilization among the reduced set of managed nodes; and applying,
by the orchestrator server, the determined adjustments to the
reduced set of managed nodes as the workloads are performed.
27. The method of claim 26, wherein assigning the workloads
comprises assigning a priority to one or more of the workloads.
28. The method of claim 27, wherein assigning a priority to one or
more of the workloads comprises assigning a deterministic execution
priority to one or more of the workloads.
Description
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] The present application claims the benefit of U.S.
Provisional Patent Application No. 62/365,969, filed Jul. 22, 2016,
U.S. Provisional Patent Application No. 62/376,859, filed Aug. 18,
2016, and U.S. Provisional Patent Application No. 62/427,268, filed
Nov. 29, 2016.
BACKGROUND
[0002] Typically, in a cloud based computing environment, at least
one server assigns workloads (e.g., processes, applications, or
other tasks) to one or more computing devices ("managed nodes") in
communication with the server through a network. Some of the
managed nodes may be highly occupied with executing workloads that
have already been assigned by the server, while others may be only
partially occupied or completely unoccupied. By assigning a
workload to a managed node that is already heavily loaded with
other workloads, the server may cause the managed node to be unable
to complete the execution of the assigned workloads in a timely and
predictable manner. As a result, a customer receiving services from
the cloud computing environment may become dissatisfied with the
service. On the other hand, performing calculations to assess the
capacity of every managed node in the network to accept a workload
may be computationally intensive, especially when the cloud based
system includes tens of thousands of managed nodes.
BRIEF DESCRIPTION OF THE DRAWINGS
[0003] The concepts described herein are illustrated by way of
example and not by way of limitation in the accompanying figures.
For simplicity and clarity of illustration, elements illustrated in
the figures are not necessarily drawn to scale. Where considered
appropriate, reference labels have been repeated among the figures
to indicate corresponding or analogous elements.
[0004] FIG. 1 is a diagram of a conceptual overview of a data
center in which one or more techniques described herein may be
implemented according to various embodiments;
[0005] FIG. 2 is a diagram of an example embodiment of a logical
configuration of a rack of the data center of FIG. 1;
[0006] FIG. 3 is a diagram of an example embodiment of another data
center in which one or more techniques described herein may be
implemented according to various embodiments;
[0007] FIG. 4 is a diagram of another example embodiment of a data
center in which one or more techniques described herein may be
implemented according to various embodiments;
[0008] FIG. 5 is a diagram of a connectivity scheme representative
of link-layer connectivity that may be established among various
sleds of the data centers of FIGS. 1, 3, and 4;
[0009] FIG. 6 is a diagram of a rack architecture that may be
representative of an architecture of any particular one of the
racks depicted in FIGS. 1-4 according to some embodiments;
[0010] FIG. 7 is a diagram of an example embodiment of a sled that
may be used with the rack architecture of FIG. 6;
[0011] FIG. 8 is a diagram of an example embodiment of a rack
architecture to provide support for sleds featuring expansion
capabilities;
[0012] FIG. 9 is a diagram of an example embodiment of a rack
implemented according to the rack architecture of FIG. 8;
[0013] FIG. 10 is a diagram of an example embodiment of a sled
designed for use in conjunction with the rack of FIG. 9;
[0014] FIG. 11 is a diagram of an example embodiment of a data
center in which one or more techniques described herein may be
implemented according to various embodiments;
[0015] FIG. 12 is a simplified block diagram of at least one
embodiment of a system for efficiently identifying managed nodes
available for workload assignments using availability data
generated by the managed nodes;
[0016] FIG. 13 is a simplified block diagram of at least one
embodiment of an orchestrator server of the system of FIG. 12;
[0017] FIG. 14 is a simplified block diagram of at least one
embodiment of an environment that may be established by the
orchestrator server of FIG. 12;
[0018] FIG. 15 is a simplified block diagram of at least one
embodiment of an environment that may be established by a managed
node of FIG. 12;
[0019] FIGS. 16-18 are a simplified flow diagram of at least one
embodiment of a method for managing workloads using availability
data generated by the managed nodes that may be performed by the
orchestrator server of FIGS. 12 and 14; and
[0020] FIGS. 19-21 are a simplified flow diagram of at least one
embodiment of a method for generating and reporting availability
data to assist in the management of workloads that may be performed
by a managed node of FIGS. 12 and 15.
DETAILED DESCRIPTION OF THE DRAWINGS
[0021] While the concepts of the present disclosure are susceptible
to various modifications and alternative forms, specific
embodiments thereof have been shown by way of example in the
drawings and will be described herein in detail. It should be
understood, however, that there is no intent to limit the concepts
of the present disclosure to the particular forms disclosed, but on
the contrary, the intention is to cover all modifications,
equivalents, and alternatives consistent with the present
disclosure and the appended claims.
[0022] References in the specification to "one embodiment," "an
embodiment," "an illustrative embodiment," etc., indicate that the
embodiment described may include a particular feature, structure,
or characteristic, but every embodiment may or may not necessarily
include that particular feature, structure, or characteristic.
Moreover, such phrases are not necessarily referring to the same
embodiment. Further, when a particular feature, structure, or
characteristic is described in connection with an embodiment, it is
submitted that it is within the knowledge of one skilled in the art
to effect such feature, structure, or characteristic in connection
with other embodiments whether or not explicitly described.
Additionally, it should be appreciated that items included in a
list in the form of "at least one A, B, and C" can mean (A); (B);
(C); (A and B); (A and C); (B and C); or (A, B, and C). Similarly,
items listed in the form of "at least one of A, B, or C" can mean
(A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and
C).
[0023] The disclosed embodiments may be implemented, in some cases,
in hardware, firmware, software, or any combination thereof. The
disclosed embodiments may also be implemented as instructions
carried by or stored on a transitory or non-transitory
machine-readable (e.g., computer-readable) storage medium, which
may be read and executed by one or more processors. A
machine-readable storage medium may be embodied as any storage
device, mechanism, or other physical structure for storing or
transmitting information in a form readable by a machine (e.g., a
volatile or non-volatile memory, a media disc, or other media
device).
[0024] In the drawings, some structural or method features may be
shown in specific arrangements and/or orderings. However, it should
be appreciated that such specific arrangements and/or orderings may
not be required. Rather, in some embodiments, such features may be
arranged in a different manner and/or order than shown in the
illustrative figures. Additionally, the inclusion of a structural
or method feature in a particular figure is not meant to imply that
such feature is required in all embodiments and, in some
embodiments, may not be included or may be combined with other
features.
[0025] FIG. 1 illustrates a conceptual overview of a data center
100 that may generally be representative of a data center or other
type of computing network in/for which one or more techniques
described herein may be implemented according to various
embodiments. As shown in FIG. 1, data center 100 may generally
contain a plurality of racks, each of which may house computing
equipment comprising a respective set of physical resources. In the
particular non-limiting example depicted in FIG. 1, data center 100
contains four racks 102A to 102D, which house computing equipment
comprising respective sets of physical resources (PCRs) 105A to
105D. According to this example, a collective set of physical
resources 106 of data center 100 includes the various sets of
physical resources 105A to 105D that are distributed among racks
102A to 102D. Physical resources 106 may include resources of
multiple types, such as--for example--processors, co-processors,
accelerators, field-programmable gate arrays (FPGAs), memory, and
storage. The embodiments are not limited to these examples.
[0026] The illustrative data center 100 differs from typical data
centers in many ways. For example, in the illustrative embodiment,
the circuit boards ("sleds") on which components such as CPUs,
memory, and other components are placed are designed for increased
thermal performance. In particular, in the illustrative embodiment,
the sleds are shallower than typical boards. In other words, the
sleds are shorter from the front to the back, where cooling fans
are located. This decreases the length of the path that air must
travel across the components on the board. Further, the components
on the sled are spaced further apart than in typical circuit
boards, and the components are arranged to reduce or eliminate
shadowing (i.e., one component in the air flow path of another
component). In the illustrative embodiment, processing components
such as the processors are located on a top side of a sled while
near memory, such as DIMMs, are located on a bottom side of the
sled. As a result of the enhanced airflow provided by this design,
the components may operate at higher frequencies and power levels
than in typical systems, thereby increasing performance.
Furthermore, the sleds are configured to blindly mate with power
and data communication cables in each rack 102A, 102B, 102C, 102D,
enhancing their ability to be quickly removed, upgraded,
reinstalled, and/or replaced. Similarly, individual components
located on the sleds, such as processors, accelerators, memory, and
data storage drives, are configured to be easily upgraded due to
their increased spacing from each other. In the illustrative
embodiment, the components additionally include hardware
attestation features to prove their authenticity.
[0027] Furthermore, in the illustrative embodiment, the data center
100 utilizes a single network architecture ("fabric") that supports
multiple other network architectures including Ethernet and
Omni-Path. The sleds, in the illustrative embodiment, are coupled
to switches via optical fibers, which provide higher bandwidth and
lower latency than typical twisted pair cabling (e.g., Category 5,
Category 5e, Category 6, etc.). Due to the high bandwidth, low
latency interconnections and network architecture, the data center
100 may, in use, pool resources, such as memory, accelerators
(e.g., graphics accelerators, FPGAs, ASICs, etc.), and data storage
drives that are physically disaggregated, and provide them to
compute resources (e.g., processors) on an as needed basis,
enabling the compute resources to access the pooled resources as if
they were local. The illustrative data center 100 additionally
receives usage information for the various resources, predicts
resource usage for different types of workloads based on past
resource usage, and dynamically reallocates the resources based on
this information.
[0028] The racks 102A, 102B, 102C, 102D of the data center 100 may
include physical design features that facilitate the automation of
a variety of types of maintenance tasks. For example, data center
100 may be implemented using racks that are designed to be
robotically-accessed, and to accept and house
robotically-manipulatable resource sleds. Furthermore, in the
illustrative embodiment, the racks 102A, 102B, 102C, 102D include
integrated power sources that receive a greater voltage than is
typical for power sources. The increased voltage enables the power
sources to provide additional power to the components on each sled,
enabling the components to operate at higher than typical
frequencies.
[0029] FIG. 2 illustrates an exemplary logical configuration of a
rack 202 of the data center 100. As shown in FIG. 2, rack 202 may
generally house a plurality of sleds, each of which may comprise a
respective set of physical resources. In the particular
non-limiting example depicted in FIG. 2, rack 202 houses sleds
204-1 to 204-4 comprising respective sets of physical resources
205-1 to 205-4, each of which constitutes a portion of the
collective set of physical resources 206 comprised in rack 202.
With respect to FIG. 1, if rack 202 is representative of--for
example--rack 102A, then physical resources 206 may correspond to
the physical resources 105A comprised in rack 102A. In the context
of this example, physical resources 105A may thus be made up of the
respective sets of physical resources, including physical storage
resources 205-1, physical accelerator resources 205-2, physical
memory resources 205-3, and physical compute resources 205-4
comprised in the sleds 204-1 to 204-4 of rack 202. The embodiments
are not limited to this example. Each sled may contain a pool of
each of the various types of physical resources (e.g., compute,
memory, accelerator, storage). By having robotically accessible and
robotically manipulatable sleds comprising disaggregated resources,
each type of resource can be upgraded independently of each other
and at their own optimized refresh rate.
[0030] FIG. 3 illustrates an example of a data center 300 that may
generally be representative of one in/for which one or more
techniques described herein may be implemented according to various
embodiments. In the particular non-limiting example depicted in
FIG. 3, data center 300 comprises racks 302-1 to 302-32. In various
embodiments, the racks of data center 300 may be arranged in such
fashion as to define and/or accommodate various access pathways.
For example, as shown in FIG. 3, the racks of data center 300 may
be arranged in such fashion as to define and/or accommodate access
pathways 311A, 311B, 311C, and 311D. In some embodiments, the
presence of such access pathways may generally enable automated
maintenance equipment, such as robotic maintenance equipment, to
physically access the computing equipment housed in the various
racks of data center 300 and perform automated maintenance tasks
(e.g., replace a failed sled, upgrade a sled). In various
embodiments, the dimensions of access pathways 311A, 311B, 311C,
and 311D, the dimensions of racks 302-1 to 302-32, and/or one or
more other aspects of the physical layout of data center 300 may be
selected to facilitate such automated operations. The embodiments
are not limited in this context.
[0031] FIG. 4 illustrates an example of a data center 400 that may
generally be representative of one in/for which one or more
techniques described herein may be implemented according to various
embodiments. As shown in FIG. 4, data center 400 may feature an
optical fabric 412. Optical fabric 412 may generally comprise a
combination of optical signaling media (such as optical cabling)
and optical switching infrastructure via which any particular sled
in data center 400 can send signals to (and receive signals from)
each of the other sleds in data center 400. The signaling
connectivity that optical fabric 412 provides to any given sled may
include connectivity both to other sleds in a same rack and sleds
in other racks. In the particular non-limiting example depicted in
FIG. 4, data center 400 includes four racks 402A to 402D. Racks
402A to 402D house respective pairs of sleds 404A-1 and 404A-2,
404B-1 and 404B-2, 404C-1 and 404C-2, and 404D-1 and 404D-2. Thus,
in this example, data center 400 comprises a total of eight sleds.
Via optical fabric 412, each such sled may possess signaling
connectivity with each of the seven other sleds in data center 400.
For example, via optical fabric 412, sled 404A-1 in rack 402A may
possess signaling connectivity with sled 404A-2 in rack 402A, as
well as the six other sleds 404B-1, 404B-2, 404C-1, 404C-2, 404D-1,
and 404D-2 that are distributed among the other racks 402B, 402C,
and 402D of data center 400. The embodiments are not limited to
this example.
[0032] FIG. 5 illustrates an overview of a connectivity scheme 500
that may generally be representative of link-layer connectivity
that may be established in some embodiments among the various sleds
of a data center, such as any of example data centers 100, 300, and
400 of FIGS. 1, 3, and 4. Connectivity scheme 500 may be
implemented using an optical fabric that features a dual-mode
optical switching infrastructure 514. Dual-mode optical switching
infrastructure 514 may generally comprise a switching
infrastructure that is capable of receiving communications
according to multiple link-layer protocols via a same unified set
of optical signaling media, and properly switching such
communications. In various embodiments, dual-mode optical switching
infrastructure 514 may be implemented using one or more dual-mode
optical switches 515. In various embodiments, dual-mode optical
switches 515 may generally comprise high-radix switches. In some
embodiments, dual-mode optical switches 515 may comprise multi-ply
switches, such as four-ply switches. In various embodiments,
dual-mode optical switches 515 may feature integrated silicon
photonics that enable them to switch communications with
significantly reduced latency in comparison to conventional
switching devices. In some embodiments, dual-mode optical switches
515 may constitute leaf switches 530 in a leaf-spine architecture
additionally including one or more dual-mode optical spine switches
520.
[0033] In various embodiments, dual-mode optical switches may be
capable of receiving both Ethernet protocol communications carrying
Internet Protocol (IP) packets and communications according to a
second, high-performance computing (HPC) link-layer protocol (e.g.,
Intel's Omni-Path Architecture, Infiniband) via optical signaling
media of an optical fabric. As reflected in FIG. 5, with respect to
any particular pair of sleds 504A and 504B possessing optical
signaling connectivity to the optical fabric, connectivity scheme
500 may thus provide support for link-layer connectivity via both
Ethernet links and HPC links. Thus, both Ethernet and HPC
communications can be supported by a single high-bandwidth,
low-latency switch fabric. The embodiments are not limited to this
example.
[0034] FIG. 6 illustrates a general overview of a rack architecture
600 that may be representative of an architecture of any particular
one of the racks depicted in FIGS. 1 to 4 according to some
embodiments. As reflected in FIG. 6, rack architecture 600 may
generally feature a plurality of sled spaces into which sleds may
be inserted, each of which may be robotically-accessible via a rack
access region 601. In the particular non-limiting example depicted
in FIG. 6, rack architecture 600 features five sled spaces 603-1 to
603-5. Sled spaces 603-1 to 603-5 feature respective multi-purpose
connector modules (MPCMs) 616-1 to 616-5.
[0035] FIG. 7 illustrates an example of a sled 704 that may be
representative of a sled of such a type. As shown in FIG. 7, sled
704 may comprise a set of physical resources 705, as well as an
MPCM 716 designed to couple with a counterpart MPCM when sled 704
is inserted into a sled space such as any of sled spaces 603-1 to
603-5 of FIG. 6. Sled 704 may also feature an expansion connector
717. Expansion connector 717 may generally comprise a socket, slot,
or other type of connection element that is capable of accepting
one or more types of expansion modules, such as an expansion sled
718. By coupling with a counterpart connector on expansion sled
718, expansion connector 717 may provide physical resources 705
with access to supplemental computing resources 705B residing on
expansion sled 718. The embodiments are not limited in this
context.
[0036] FIG. 8 illustrates an example of a rack architecture 800
that may be representative of a rack architecture that may be
implemented in order to provide support for sleds featuring
expansion capabilities, such as sled 704 of FIG. 7. In the
particular non-limiting example depicted in FIG. 8, rack
architecture 800 includes seven sled spaces 803-1 to 803-7, which
feature respective MPCMs 816-1 to 816-7. Sled spaces 803-1 to 803-7
include respective primary regions 803-1A to 803-7A and respective
expansion regions 803-1B to 803-7B. With respect to each such sled
space, when the corresponding MPCM is coupled with a counterpart
MPCM of an inserted sled, the primary region may generally
constitute a region of the sled space that physically accommodates
the inserted sled. The expansion region may generally constitute a
region of the sled space that can physically accommodate an
expansion module, such as expansion sled 718 of FIG. 7, in the
event that the inserted sled is configured with such a module.
[0037] FIG. 9 illustrates an example of a rack 902 that may be
representative of a rack implemented according to rack architecture
800 of FIG. 8 according to some embodiments. In the particular
non-limiting example depicted in FIG. 9, rack 902 features seven
sled spaces 903-1 to 903-7, which include respective primary
regions 903-1A to 903-7A and respective expansion regions 903-1B to
903-7B. In various embodiments, temperature control in rack 902 may
be implemented using an air cooling system. For example, as
reflected in FIG. 9, rack 902 may feature a plurality of fans 919
that are generally arranged to provide air cooling within the
various sled spaces 903-1 to 903-7. In some embodiments, the height
of the sled space is greater than the conventional "1U" server
height. In such embodiments, fans 919 may generally comprise
relatively slow, large diameter cooling fans as compared to fans
used in conventional rack configurations. Running larger diameter
cooling fans at lower speeds may increase fan lifetime relative to
smaller diameter cooling fans running at higher speeds while still
providing the same amount of cooling. The sleds are physically
shallower than conventional rack dimensions. Further, components
are arranged on each sled to reduce thermal shadowing (i.e., not
arranged serially in the direction of air flow). As a result, the
wider, shallower sleds allow for an increase in device performance
because the devices can be operated at a higher thermal envelope
(e.g., 250 W) due to improved cooling (i.e., no thermal shadowing,
more space between devices, more room for larger heat sinks,
etc.).
[0038] MPCMs 916-1 to 916-7 may be configured to provide inserted
sleds with access to power sourced by respective power modules
920-1 to 920-7, each of which may draw power from an external power
source 921. In various embodiments, external power source 921 may
deliver alternating current (AC) power to rack 902, and power
modules 920-1 to 920-7 may be configured to convert such AC power
to direct current (DC) power to be sourced to inserted sleds. In
some embodiments, for example, power modules 920-1 to 920-7 may be
configured to convert 277-volt AC power into 12-volt DC power for
provision to inserted sleds via respective MPCMs 916-1 to 916-7.
The embodiments are not limited to this example.
[0039] MPCMs 916-1 to 916-7 may also be arranged to provide
inserted sleds with optical signaling connectivity to a dual-mode
optical switching infrastructure 914, which may be the same as--or
similar to--dual-mode optical switching infrastructure 514 of FIG.
5. In various embodiments, optical connectors contained in MPCMs
916-1 to 916-7 may be designed to couple with counterpart optical
connectors contained in MPCMs of inserted sleds to provide such
sleds with optical signaling connectivity to dual-mode optical
switching infrastructure 914 via respective lengths of optical
cabling 922-1 to 922-7. In some embodiments, each such length of
optical cabling may extend from its corresponding MPCM to an
optical interconnect loom 923 that is external to the sled spaces
of rack 902. In various embodiments, optical interconnect loom 923
may be arranged to pass through a support post or other type of
load-bearing element of rack 902. The embodiments are not limited
in this context. Because inserted sleds connect to an optical
switching infrastructure via MPCMs, the resources typically spent
in manually configuring the rack cabling to accommodate a newly
inserted sled can be saved.
[0040] FIG. 10 illustrates an example of a sled 1004 that may be
representative of a sled designed for use in conjunction with rack
902 of FIG. 9 according to some embodiments. Sled 1004 may feature
an MPCM 1016 that comprises an optical connector 1016A and a power
connector 1016B, and that is designed to couple with a counterpart
MPCM of a sled space in conjunction with insertion of MPCM 1016
into that sled space. Coupling MPCM 1016 with such a counterpart
MPCM may cause power connector 1016B to couple with a power
connector comprised in the counterpart MPCM. This may generally
enable physical resources 1005 of sled 1004 to source power from an
external source, via power connector 1016B and power transmission
media 1024 that conductively couples power connector 1016B to
physical resources 1005.
[0041] Sled 1004 may also include dual-mode optical network
interface circuitry 1026. Dual-mode optical network interface
circuitry 1026 may generally comprise circuitry that is capable of
communicating over optical signaling media according to each of
multiple link-layer protocols supported by dual-mode optical
switching infrastructure 914 of FIG. 9. In some embodiments,
dual-mode optical network interface circuitry 1026 may be capable
both of Ethernet protocol communications and of communications
according to a second, high-performance protocol. In various
embodiments, dual-mode optical network interface circuitry 1026 may
include one or more optical transceiver modules 1027, each of which
may be capable of transmitting and receiving optical signals over
each of one or more optical channels. The embodiments are not
limited in this context.
[0042] Coupling MPCM 1016 with a counterpart MPCM of a sled space
in a given rack may cause optical connector 1016A to couple with an
optical connector comprised in the counterpart MPCM. This may
generally establish optical connectivity between optical cabling of
the sled and dual-mode optical network interface circuitry 1026,
via each of a set of optical channels 1025. Dual-mode optical
network interface circuitry 1026 may communicate with the physical
resources 1005 of sled 1004 via electrical signaling media 1028. In
addition to the dimensions of the sleds and arrangement of
components on the sleds to provide improved cooling and enable
operation at a relatively higher thermal envelope (e.g., 250 W), as
described above with reference to FIG. 9, in some embodiments, a
sled may include one or more additional features to facilitate air
cooling, such as a heatpipe and/or heat sinks arranged to dissipate
heat generated by physical resources 1005. It is worthy of note
that although the example sled 1004 depicted in FIG. 10 does not
feature an expansion connector, any given sled that features the
design elements of sled 1004 may also feature an expansion
connector according to some embodiments. The embodiments are not
limited in this context.
[0043] FIG. 11 illustrates an example of a data center 1100 that
may generally be representative of one in/for which one or more
techniques described herein may be implemented according to various
embodiments. As reflected in FIG. 11, a physical infrastructure
management framework 1150A may be implemented to facilitate
management of a physical infrastructure 1100A of data center 1100.
In various embodiments, one function of physical infrastructure
management framework 1150A may be to manage automated maintenance
functions within data center 1100, such as the use of robotic
maintenance equipment to service computing equipment within
physical infrastructure 1100A. In some embodiments, physical
infrastructure 1100A may feature an advanced telemetry system that
performs telemetry reporting that is sufficiently robust to support
remote automated management of physical infrastructure 1100A. In
various embodiments, telemetry information provided by such an
advanced telemetry system may support features such as failure
prediction/prevention capabilities and capacity planning
capabilities. In some embodiments, physical infrastructure
management framework 1150A may also be configured to manage
authentication of physical infrastructure components using hardware
attestation techniques. For example, robots may verify the
authenticity of components before installation by analyzing
information collected from a radio frequency identification (RFID)
tag associated with each component to be installed. The embodiments
are not limited in this context.
[0044] As shown in FIG. 11, the physical infrastructure 1100A of
data center 1100 may comprise an optical fabric 1112, which may
include a dual-mode optical switching infrastructure 1114. Optical
fabric 1112 and dual-mode optical switching infrastructure 1114 may
be the same as--or similar to--optical fabric 412 of FIG. 4 and
dual-mode optical switching infrastructure 514 of FIG. 5,
respectively, and may provide high-bandwidth, low-latency,
multi-protocol connectivity among sleds of data center 1100. As
discussed above, with reference to FIG. 1, in various embodiments,
the availability of such connectivity may make it feasible to
disaggregate and dynamically pool resources such as accelerators,
memory, and storage. In some embodiments, for example, one or more
pooled accelerator sleds 1130 may be included among the physical
infrastructure 1100A of data center 1100, each of which may
comprise a pool of accelerator resources--such as co-processors
and/or FPGAs, for example--that is globally accessible to other
sleds via optical fabric 1112 and dual-mode optical switching
infrastructure 1114.
[0045] In another example, in various embodiments, one or more
pooled storage sleds 1132 may be included among the physical
infrastructure 1100A of data center 1100, each of which may
comprise a pool of storage resources that is globally accessible
to other sleds via optical fabric 1112 and dual-mode
optical switching infrastructure 1114. In some embodiments, such
pooled storage sleds 1132 may comprise pools of solid-state storage
devices such as solid-state drives (SSDs). In various embodiments,
one or more high-performance processing sleds 1134 may be included
among the physical infrastructure 1100A of data center 1100. In
some embodiments, high-performance processing sleds 1134 may
comprise pools of high-performance processors, as well as cooling
features that enhance air cooling to yield a higher thermal
envelope of up to 250 W or more. In various embodiments, any given
high-performance processing sled 1134 may feature an expansion
connector 1117 that can accept a far memory expansion sled, such
that the far memory that is locally available to that
high-performance processing sled 1134 is disaggregated from the
processors and near memory comprised on that sled. In some
embodiments, such a high-performance processing sled 1134 may be
configured with far memory using an expansion sled that comprises
low-latency SSD storage. The optical infrastructure allows for
compute resources on one sled to utilize remote accelerator/FPGA,
memory, and/or SSD resources that are disaggregated on a sled
located on the same rack or any other rack in the data center. The
remote resources can be located one switch jump away or two-switch
jumps away in the spine-leaf network architecture described above
with reference to FIG. 5. The embodiments are not limited in this
context.
[0046] In various embodiments, one or more layers of abstraction
may be applied to the physical resources of physical infrastructure
1100A in order to define a virtual infrastructure, such as a
software-defined infrastructure 1100B. In some embodiments, virtual
computing resources 1136 of software-defined infrastructure 1100B
may be allocated to support the provision of cloud services 1140.
In various embodiments, particular sets of virtual computing
resources 1136 may be grouped for provision to cloud services 1140
in the form of SDI services 1138. Examples of cloud services 1140
may include--without limitation--software as a service (SaaS)
services 1142, platform as a service (PaaS) services 1144, and
infrastructure as a service (IaaS) services 1146.
[0047] In some embodiments, management of software-defined
infrastructure 1100B may be conducted using a virtual
infrastructure management framework 1150B. In various embodiments,
virtual infrastructure management framework 1150B may be designed
to implement workload fingerprinting techniques and/or
machine-learning techniques in conjunction with managing allocation
of virtual computing resources 1136 and/or SDI services 1138 to
cloud services 1140. In some embodiments, virtual infrastructure
management framework 1150B may use/consult telemetry data in
conjunction with performing such resource allocation. In various
embodiments, an application/service management framework 1150C may
be implemented in order to provide QoS management capabilities for
cloud services 1140. The embodiments are not limited in this
context.
[0048] As shown in FIG. 12, an illustrative system 1210 for
efficiently identifying managed nodes 1260 available for workload
assignments includes an orchestrator server 1240 in communication
with a set of managed nodes 1260. Each managed node 1260 may be
embodied as an assembly of resources (e.g., physical resources
206), such as compute resources (e.g., physical compute resources
205-4), storage resources (e.g., physical storage resources 205-1),
accelerator resources (e.g., physical accelerator resources 205-2),
or other resources (e.g., physical memory resources 205-3) from the
same or different sleds (e.g., the sleds 204-1, 204-2, 204-3,
204-4, etc.) or racks (e.g., one or more of racks 302-1 through
302-32). Each managed node 1260 may be established, defined, or
"spun up" by the orchestrator server 1240 at the time a workload is
to be assigned to the managed node 1260 or at any other time, and
may exist regardless of whether any workloads are presently
assigned to the managed node 1260. In the illustrative embodiment,
the set of managed nodes 1260 includes managed nodes 1250, 1252,
and 1254. While three managed nodes 1260 are shown for simplicity,
it should be understood that, in the illustrative embodiment, the
set includes many more managed nodes 1260 (e.g., tens of thousands
of managed nodes 1260). The system 1210 may be located in a data
center and provide storage and compute services (e.g., cloud
services) to a client device 1220 that is in communication with the
system 1210 through a network 1230. The orchestrator server 1240
may support a cloud operating environment, such as OpenStack, and
the managed nodes 1260 may execute one or more applications or
processes (i.e., workloads), such as in virtual machines or
containers, on behalf of a user of the client device 1220. As
discussed in more detail herein, the orchestrator server 1240, in
operation, is configured to receive availability data from each
managed node 1260. The availability data may be embodied as any
data indicative of the ability of the corresponding managed node to
receive and execute a workload in addition to any workloads the
managed node 1260 is presently executing. After receiving the
availability data, which is generated by the managed nodes 1260,
the orchestrator server 1240 performs analytics to determine how to
assign or reassign workloads among the managed nodes 1260 that
reported themselves as being available in the availability data. As
such, in the illustrative embodiment, the orchestrator server 1240
focuses the data analytics for determining workload assignments and
reassignments to the limited set of available managed nodes 1260,
thereby enabling the orchestrator server 1240 to operate more
efficiently.
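As a rough illustration of the reduction step described above, the following Python sketch shows an orchestrator restricting its analytics to the nodes that reported themselves as available. The sketch is not part of the disclosure; the class, field, and function names are hypothetical.

    from dataclasses import dataclass, field

    @dataclass
    class ManagedNode:
        """Hypothetical stand-in for a managed node 1260."""
        node_id: int
        available: bool  # self-reported availability data
        telemetry: dict = field(default_factory=dict)  # e.g., {"cpu": 0.55}

    def reduced_set(nodes):
        # Keep only the nodes whose availability data indicates they can
        # accept an additional workload.
        return [n for n in nodes if n.available]

    def rebalance(nodes):
        # Analytics (trend identification, workload profiling, prediction)
        # and workload adjustments run only over the reduced set, so the
        # cost scales with the number of available nodes rather than the
        # total fleet size.
        for node in reduced_set(nodes):
            pass  # determine and apply adjustments for this node

Because the filtering happens before any analytics, a fleet of tens of thousands of managed nodes in which only a few hundred are available incurs analysis cost proportional only to those few hundred.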
[0049] Each managed node 1260, in the illustrative embodiment,
continually performs a self-evaluation as the managed node 1260
executes one or more workloads to determine whether the managed
node 1260 is able to take on an additional workload. In doing so,
each managed node 1260 generates telemetry data indicative of
performance and conditions (e.g., resource utilization, one or more
temperatures, fan speeds, etc.) as the managed node 1260 executes
one or more workloads and compares the telemetry data to predefined
thresholds. If the values in the telemetry data satisfy the
thresholds (e.g., a present processor utilization is less than a
predefined threshold processor utilization), the managed node 1260
determines that it is available for an additional workload.
Otherwise, the managed node 1260 determines that it is unavailable
for an additional workload. In the illustrative embodiment, the
predefined thresholds may vary, depending on whether the managed
node 1260 has been assigned a workload that is to be executed with
deterministic (i.e., predictable) performance (e.g., high priority)
rather than a normal priority. As such, when executing a workload
that has been designated to be executed deterministically, the
processor utilization threshold may be a lower value (e.g., 70%)
than the processor utilization threshold (e.g., 80%) if the managed
node 1260 is executing workloads that do not have high priority.
Furthermore, the managed nodes 1260 may communicate with each other
to collect availability data from other managed nodes 1260, such as
with a bee foraging algorithm, to identify the managed nodes 1260
available to receive additional workloads, rather than each managed
node 1260 independently reporting its availability data directly to
the orchestrator server 1240.
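The self-evaluation described above reduces to a threshold comparison whose bound depends on workload priority. The following is a minimal Python sketch, assuming a single processor-utilization metric and the example thresholds from the text (70% when a deterministic workload is present, 80% otherwise); the function and parameter names are illustrative only.

    DETERMINISTIC_CPU_THRESHOLD = 0.70  # example value from the text
    NORMAL_CPU_THRESHOLD = 0.80         # example value from the text

    def node_is_available(cpu_utilization, runs_deterministic_workload):
        # A stricter threshold applies when the node executes a workload
        # designated for deterministic (high-priority) execution.
        threshold = (DETERMINISTIC_CPU_THRESHOLD
                     if runs_deterministic_workload
                     else NORMAL_CPU_THRESHOLD)
        return cpu_utilization < threshold

    # A node at 75% utilization is available for normal workloads but not
    # while it hosts a deterministic workload.
    assert node_is_available(0.75, runs_deterministic_workload=False)
    assert not node_is_available(0.75, runs_deterministic_workload=True)

In practice, the comparison would span multiple telemetry values (temperatures, fan speeds, memory and network utilization), with the node reporting itself available only if every threshold is satisfied.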
[0050] Referring now to FIG. 13, the orchestrator server 1240 may
be embodied as any type of compute device capable of performing the
functions described herein, including issuing a request to have
cloud services performed, receiving results of the cloud services,
assigning workloads to managed nodes 1260, analyzing telemetry data
indicative of performance and conditions (e.g., resource
utilization, one or more temperatures, fan speeds, etc.) as the
workloads are executed, and adjusting the assignments of the
workloads to increase resource utilization as the workloads are
performed. For example, the orchestrator server 1240 may be
embodied as a computer, a distributed computing system, one or more
sleds (e.g., the sleds 204-1, 204-2, 204-3, 204-4, etc.), a server
(e.g., stand-alone, rack-mounted, blade, etc.), a multiprocessor
system, a network appliance (e.g., physical or virtual), a desktop
computer, a workstation, a laptop computer, a notebook computer, or
a processor-based system. As shown in FIG.
13, the illustrative orchestrator server 1240 includes a central
processing unit (CPU) 1302, a main memory 1304, an input/output
(I/O) subsystem 1306, communication circuitry 1308, and one or more
data storage devices 1312. Of course, in other embodiments, the
orchestrator server 1240 may include other or additional
components, such as those commonly found in a computer (e.g.,
display, peripheral devices, etc.). Additionally, in some
embodiments, one or more of the illustrative components may be
incorporated in, or otherwise form a portion of, another component.
For example, in some embodiments, the main memory 1304, or portions
thereof, may be incorporated in the CPU 1302.
[0051] The CPU 1302 may be embodied as any type of processor
capable of performing the functions described herein. The CPU 1302
may be embodied as a single or multi-core processor(s), a
microcontroller, or other processor or processing/controlling
circuit. In some embodiments, the CPU 1302 may be embodied as,
include, or be coupled to a field programmable gate array (FPGA),
an application specific integrated circuit (ASIC), reconfigurable
hardware or hardware circuitry, or other specialized hardware to
facilitate performance of the functions described herein. As
discussed above, the managed node 1260 may include resources
distributed across multiple sleds and, in such embodiments, the CPU 1302 may include portions thereof located on the same sled or a different sled. Similarly, the main memory 1304 may be embodied as
any type of volatile (e.g., dynamic random access memory (DRAM),
etc.) or non-volatile memory or data storage capable of performing
the functions described herein. In some embodiments, all or a
portion of the main memory 1304 may be integrated into the CPU
1302. In operation, the main memory 1304 may store various software
and data used during operation such as availability data, telemetry
data, policy data, workload labels, workload classifications,
workload adjustment data, operating systems, applications,
programs, libraries, and drivers. As discussed above, the managed node 1260 may include resources distributed across multiple sleds and, in such embodiments, the main memory 1304 may include portions thereof located on the same sled or a different sled.
[0052] The I/O subsystem 1306 may be embodied as circuitry and/or
components to facilitate input/output operations with the CPU 1302,
the main memory 1304, and other components of the orchestrator
server 1240. For example, the I/O subsystem 1306 may be embodied
as, or otherwise include, memory controller hubs, input/output
control hubs, integrated sensor hubs, firmware devices,
communication links (e.g., point-to-point links, bus links, wires,
cables, light guides, printed circuit board traces, etc.), and/or
other components and subsystems to facilitate the input/output
operations. In some embodiments, the I/O subsystem 1306 may form a
portion of a system-on-a-chip (SoC) and be incorporated, along with
one or more of the CPU 1302, the main memory 1304, and other
components of the orchestrator server 1240, on a single integrated
circuit chip.
[0053] The communication circuitry 1308 may be embodied as any
communication circuit, device, or collection thereof, capable of
enabling communications over the network 1230 between the
orchestrator server 1240 and another compute device (e.g., the
client device 1220 and/or the managed nodes 1260). The
communication circuitry 1308 may be configured to use any one or more communication technologies (e.g., wired or wireless
communications) and associated protocols (e.g., Ethernet,
Bluetooth.RTM., Wi-Fi.RTM., WiMAX, etc.) to effect such
communication.
[0054] The illustrative communication circuitry 1308 includes a
network interface controller (NIC) 1310, which may also be referred
to as a host fabric interface (HFI). The NIC 1310 may be embodied
as one or more add-in-boards, daughtercards, network interface
cards, controller chips, chipsets, or other devices that may be
used by the orchestrator server 1240 to connect with another
compute device (e.g., a managed node 1260 or the client device
1220). In some embodiments, the NIC 1310 may be embodied as part of
a system-on-a-chip (SoC) that includes one or more processors, or
included on a multichip package that also contains one or more
processors. In some embodiments, the NIC 1310 may include a local
processor (not shown) and/or a local memory (not shown) that are
both local to the NIC 1310. In such embodiments, the local
processor of the NIC 1310 may be capable of performing one or more
of the functions of the CPU 1302 described herein. Additionally or
alternatively, in such embodiments, the local memory of the NIC
1310 may be integrated into one or more components of the
orchestrator server 1240 at the board level, socket level, chip
level, and/or other levels. As discussed above, the managed node 1260 may include resources distributed across multiple sleds and, in such embodiments, the communication circuitry 1308 may include portions thereof located on the same sled or a different sled.
[0055] The one or more illustrative data storage devices 1312 may be embodied as any type of device configured for short-term or long-term storage of data such as, for example, memory devices and
circuits, memory cards, hard disk drives, solid-state drives, or
other data storage devices. Each data storage device 1312 may
include a system partition that stores data and firmware code for
the data storage device 1312. Each data storage device 1312 may
also include an operating system partition that stores data files
and executables for an operating system.
[0056] Additionally, the orchestrator server 1240 may include a
display 1314. The display 1314 may be embodied as, or otherwise
use, any suitable display technology including, for example, a
liquid crystal display (LCD), a light emitting diode (LED) display,
a cathode ray tube (CRT) display, a plasma display, and/or other display technology usable in a compute device. The display 1314 may include a
touchscreen sensor that uses any suitable touchscreen input
technology to detect the user's tactile selection of information
displayed on the display including, but not limited to, resistive
touchscreen sensors, capacitive touchscreen sensors, surface
acoustic wave (SAW) touchscreen sensors, infrared touchscreen
sensors, optical imaging touchscreen sensors, acoustic touchscreen sensors, and/or other types of touchscreen sensors.
[0057] Additionally or alternatively, the orchestrator server 1240
may include one or more peripheral devices 1316. Such peripheral
devices 1316 may include any type of peripheral device commonly
found in a compute device such as speakers, a mouse, a keyboard,
and/or other input/output devices, interface devices, and/or other
peripheral devices.
[0058] The client device 1220 and the managed nodes 1260 may have
components similar to those described in FIG. 13. The description
of those components of the orchestrator server 1240 is equally
applicable to the description of components of the client device
1220 and the managed nodes 1260 and is not repeated herein for
clarity of the description. Further, it should be appreciated that
any of the client device 1220 and the managed nodes 1260 may
include other components, sub-components, and devices commonly
found in a computing device, which are not discussed above in
reference to the orchestrator server 1240 and not discussed herein
for clarity of the description.
[0059] As described above, the client device 1220, the orchestrator server 1240, and the managed nodes 1260 are illustratively in
communication via the network 1230, which may be embodied as any
type of wired or wireless communication network, including global
networks (e.g., the Internet), local area networks (LANs) or wide
area networks (WANs), cellular networks (e.g., Global System for
Mobile Communications (GSM), 3G, Long Term Evolution (LTE),
Worldwide Interoperability for Microwave Access (WiMAX), etc.),
digital subscriber line (DSL) networks, cable networks (e.g.,
coaxial networks, fiber networks, etc.), or any combination
thereof.
[0060] Referring now to FIG. 14, in the illustrative embodiment,
the orchestrator server 1240 may establish an environment 1400
during operation. The illustrative environment 1400 includes a
network communicator 1420, a telemetry monitor 1430, a policy
manager 1440, and a resource manager 1450. Each of the components
of the environment 1400 may be embodied as hardware, firmware,
software, or a combination thereof. As such, in some embodiments,
one or more of the components of the environment 1400 may be
embodied as circuitry or a collection of electrical devices (e.g.,
network communicator circuitry 1420, telemetry monitor circuitry
1430, policy manager circuitry 1440, resource manager circuitry
1450, etc.). It should be appreciated that, in such embodiments,
one or more of the network communicator circuitry 1420, telemetry
monitor circuitry 1430, policy manager circuitry 1440, or resource
manager circuitry 1450 may form a portion of one or more of the CPU
1302, the main memory 1304, the I/O subsystem 1306, and/or other
components of the orchestrator server 1240. In the illustrative
embodiment, the environment 1400 includes telemetry data 1402 which
may be embodied as data indicative of the performance and
conditions (e.g., resource utilization, one or more temperatures,
fan speeds, etc.) of each managed node 1260 as the managed nodes
1260 execute the workloads assigned to them.
[0061] Additionally, the illustrative environment 1400 includes
policy data 1404 indicative of user-defined preferences as to the
heat production, power consumption, and life expectancy of the
components of the managed nodes 1260. Further, the illustrative
environment 1400 includes workload labels 1406 which may be
embodied as any identifiers (e.g., process numbers, executable file
names, alphanumeric tags, etc.) that uniquely identify each
workload executed by the managed nodes 1260. In addition, the
illustrative environment 1400 includes workload classifications
1408 which may be embodied as any data indicative of the resource
utilization tendencies of each workload (e.g., processor intensive,
network bandwidth intensive, etc.). Further, the illustrative
environment 1400 includes workload adjustment data 1410 which may
be embodied as any data indicative of reassignments (e.g., live
migrations) of one or more workloads from one managed node 1260 to
another managed node 1260 and/or adjustments to settings for
components within each managed node 1260, such as processor
capacity (e.g., a number of cores to be used, a clock speed, a
percentage of available processor cycles, etc.) available to one or
more workloads, memory resource capacity (e.g., amount of memory to
be used and/or frequency of memory accesses to volatile memory
and/or non-volatile memory) available to one or more workloads,
and/or communication circuitry capacity available to one or more
workloads. The illustrative environment 1400 additionally includes availability data 1412, which may be embodied as any data
indicative of a determination made by each of the managed nodes
1260 as to whether the managed node 1260 is able to receive and
execute another workload. In the illustrative embodiment, the
orchestrator server 1240 continually receives updated availability
data 1412 such that a particular managed node 1260 that initially
reported an unavailability to take on an additional workload may
later report that it is able to execute an additional workload. As
described herein, the managed nodes 1260 that reported an
availability to perform additional workloads form a "short list" of
managed nodes 1260 to be analyzed in more detail by the
orchestrator server 1240.
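To make the data described in the preceding paragraphs more concrete, the following sketch lays out one hypothetical shape for the environment 1400 data; the field names and types are assumptions for illustration, not structures defined by this disclosure:

```python
from dataclasses import dataclass, field

@dataclass
class AvailabilityRecord:
    """Availability data 1412 for one managed node."""
    node_id: str
    available: bool     # the node's own determination
    timestamp: float    # when the report was generated

@dataclass
class Environment:
    """Rough, assumed shape of the environment 1400 data described above."""
    telemetry: dict = field(default_factory=dict)        # telemetry data 1402
    policy: dict = field(default_factory=dict)           # policy data 1404
    labels: dict = field(default_factory=dict)           # workload labels 1406
    classifications: dict = field(default_factory=dict)  # classifications 1408
    adjustments: list = field(default_factory=list)      # adjustment data 1410
    availability: dict = field(default_factory=dict)     # node_id -> AvailabilityRecord
```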
[0062] In the illustrative environment 1400, the network
communicator 1420, which may be embodied as hardware, firmware,
software, virtualized hardware, emulated architecture, and/or a
combination thereof as discussed above, is configured to facilitate
inbound and outbound network communications (e.g., network traffic,
network packets, network flows, etc.) to and from the orchestrator
server 1240, respectively. To do so, the network communicator 1420
is configured to receive and process data packets from one system
or computing device (e.g., the client device 1220) and to prepare
and send data packets to another computing device or system (e.g.,
the managed nodes 1260). Accordingly, in some embodiments, at least
a portion of the functionality of the network communicator 1420 may
be performed by the communication circuitry 1308, and, in the
illustrative embodiment, by the NIC 1310.
[0063] The telemetry monitor 1430, which may be embodied as
hardware, firmware, software, virtualized hardware, emulated
architecture, and/or a combination thereof as discussed above, is
configured to collect status data (e.g., telemetry data 1402 and
managed node availability data 1412) from the managed nodes 1260 as
the managed nodes 1260 execute the workloads assigned to them. The
telemetry monitor 1430 may actively poll each of the managed nodes
1260 for updated status data on an ongoing basis or may passively
receive the status data from the managed nodes 1260, such as by
listening on a particular network port for updated status data. The
telemetry monitor 1430 may further parse and categorize the status
data, such as by separating the status data into an individual file
or data set for each managed node 1260. In the illustrative
embodiment, the telemetry monitor 1430 includes a node availability
data collector 1432 to receive and parse the availability data 1412
for each of the managed nodes 1260. The node availability data
collector 1432, in the illustrative embodiment, may receive
availability data 1412 from one or more managed nodes 1260 on
behalf of multiple other managed nodes 1260, rather than receiving
the availability data directly from each managed node 1260. In such
embodiments, the node availability data collector 1432 may parse an
aggregated set of availability data 1412 received from one of the
managed nodes 1260 to identify which portions of the availability
data 1412 pertain to which managed nodes 1260. The node
availability data collector 1432 may also overwrite earlier
availability data for a particular managed node 1260 with updated
availability data 1412, compare a present time to a time stamp
associated with existing availability data 1412 from a managed node
1260 to determine whether the availability data 1412 is potentially
outdated (i.e., older than a predefined time period), and, in
response to a determination that the availability data 1412 is
potentially outdated, prompt the corresponding managed nodes 1260
for updated availability data 1412.
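The overwrite-and-refresh behavior of the node availability data collector 1432 might be sketched as follows, reusing the AvailabilityRecord layout from the earlier sketch; the 30-second window stands in for the predefined time period and is an assumed value:

```python
import time

STALE_AFTER_SECONDS = 30.0  # assumed value for the "predefined time period"

def ingest_report(store, record):
    """Overwrite any earlier availability data for the reporting node."""
    store[record.node_id] = record

def find_outdated(store):
    """Return the ids of nodes whose availability data is potentially
    outdated; the collector would then prompt those nodes for updates."""
    now = time.time()
    return [node_id for node_id, record in store.items()
            if now - record.timestamp > STALE_AFTER_SECONDS]
```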
[0064] The policy manager 1440, which may be embodied as hardware,
firmware, software, virtualized hardware, emulated architecture,
and/or a combination thereof as discussed above, is configured to
receive and store the policy data 1404, which, as described above,
is indicative of user-defined preferences pertaining to operating
parameters of the components of the managed nodes 1260 that may
affect, among other items, heat production, power consumption,
and/or life expectancy (i.e., wear) of the managed nodes 1260. The
policy manager 1440 is further configured to provide the policy
data 1404 to the resource manager 1450 to assist in determining
adjustments to the assignment of workloads among the managed nodes
1260 and for adjusting settings within one or more of the managed
nodes (e.g., processor capacity available to one or more workloads,
memory resource capacity available to one or more workloads, and/or
communication circuitry capacity available to one or more
workloads) to optimize resource utilization, subject to the
policies defined in the policy data 1404.
[0065] The resource manager 1450, which may be embodied as
hardware, firmware, software, virtualized hardware, emulated
architecture, and/or a combination thereof, is configured to
generate data analytics from the telemetry data 1402, identify the
workloads, classify the workloads, identify trends in the resource
utilization of the workloads, predict future resource utilizations
of the workloads, and adjust the assignments of the workloads to
the managed nodes 1260 and the settings of the managed nodes 1260
to increase the resource utilization (e.g., to reduce the amount of
idle resources) while staying in compliance with the policy data
1404. For efficiency, in the illustrative embodiment, the resource
manager 1450 limits the above analysis to the managed nodes 1260
that reported an availability to receive an additional workload,
thereby significantly reducing the computational burden on the
orchestrator server 1240 in assigning and balancing workloads
across the managed nodes 1260. In the illustrative embodiment, the
resource manager 1450 includes an analysis limiter 1452, a workload
labeler 1454, a workload classifier 1456, a workload behavior
predictor 1458, a workload placer 1460, and a node settings
adjuster 1462. The analysis limiter 1452, in the illustrative
embodiment, is configured to analyze the availability data 1412 and
generate, as a function of the availability data, a "short list"
(i.e., a reduced set) of the managed nodes 1260 for analysis by the
workload labeler 1454, the workload classifier 1456, the workload
behavior predictor 1458, the workload placer 1460, and the node
settings adjuster 1462. In the illustrative embodiment, the analysis limiter 1452 adds, to the reduced set, identifiers of the managed nodes 1260 that indicated, in the availability data 1412,
that they are available to receive an additional workload and
excludes the managed nodes 1260 that indicated an unavailability to
receive an additional workload.
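The filtering performed by the analysis limiter 1452 amounts to a single pass over the availability data 1412; a minimal sketch, assuming the record layout used in the earlier sketches:

```python
def reduced_set(availability):
    """Build the "short list": ids of nodes whose most recent report
    indicated availability; nodes reporting unavailability are excluded."""
    return {node_id for node_id, record in availability.items()
            if record.available}
```

The components named above would then iterate only over this reduced set rather than over all of the managed nodes 1260.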
[0066] The workload labeler 1454, in the illustrative embodiment,
is configured to assign a workload label 1406 to each workload
presently performed or scheduled to be performed by one or more of
the managed nodes 1260 in the reduced set. The workload labeler
1454 may generate the workload label 1406 as a function of an
executable name of the workload, a hash of all or a portion of the
code of the workload, or based on any other method to uniquely
identify each workload. The workload classifier 1456, in the illustrative embodiment, is configured to categorize each labeled workload based on the resource utilization of each workload.
For example, the workload classifier 1456 may categorize one set of
labeled workloads as being consistently processor intensive,
another set of labeled workloads as being consistently memory
intensive, and another set of workloads as having phases of
different resource utilization (high memory use and low processor
use, followed by high processor use and low memory use, etc.).
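As one hedged example of the labeling scheme, a workload label 1406 could be derived from a hash of the workload's code, as the paragraph above suggests; the function name and the truncation to sixteen characters are illustrative choices:

```python
import hashlib

def make_workload_label(executable_path):
    """Derive a workload label 1406 from a hash of the workload's code;
    any other scheme that uniquely identifies the workload would do."""
    with open(executable_path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()[:16]
```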
[0067] The workload behavior predictor 1458, in the illustrative
embodiment, is configured to analyze the telemetry data 1402 and
the workload classifications 1408 to predict future resource
utilization needs of the various workloads based on their previous
usage. In doing so, the workload behavior predictor 1458 may
determine a present phase of a given workload and determine an
amount of remaining time until the workload transitions to another
phase having different resource utilization characteristics. The
workload placer 1460, in the illustrative embodiment, is configured
to initially assign workloads to the various managed nodes 1260 in
the reduced set generated by the analysis limiter 1452, and
determine, based on the telemetry data 1402, the workload
classifications 1408, and the policy data 1404, whether the
resources of the managed nodes 1260 could be more efficiently used
(e.g., to reduce the amount of idle resources and to reduce the
load on over-used resources) by reassigning the workloads among the
managed nodes 1260, without violating the policies in the policy
data (e.g., without generating more than a threshold amount of
heat, without consuming more than a threshold amount of power,
etc.). Similarly, the node settings adjuster 1462, in the
illustrative embodiment, is configured to determine one or more
adjustments to the settings within the reduced set of managed nodes
1260 to provide or restrict the resources available to the
workloads in accordance with the goal of optimizing resource usage
and maintaining conformance with the policies in the policy data
1404. The settings may be associated with the operating system
and/or the firmware or drivers of the components of the managed
nodes 1260.
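The phase-matching step of the workload behavior predictor 1458 could be approximated by comparing the present utilization sample to per-phase averages recorded in a workload's classification 1408; the distance metric and data layout below are assumptions for illustration:

```python
def match_phase(sample, phases):
    """Pick the phase whose average utilization profile is closest (in
    squared distance) to the present sample; the phase's recorded
    duration then bounds the time until the next transition."""
    def distance(phase):
        return sum((sample[k] - phase["profile"][k]) ** 2 for k in sample)
    return min(phases, key=distance)

# Example phase table for one workload classification (assumed layout):
phases = [
    {"name": "compute", "profile": {"cpu": 0.9, "memory": 0.2}, "duration_s": 120},
    {"name": "buffer",  "profile": {"cpu": 0.2, "memory": 0.8}, "duration_s": 60},
]
print(match_phase({"cpu": 0.85, "memory": 0.25}, phases)["name"])  # "compute"
```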
[0068] It should be appreciated that each of the analysis limiter
1452, workload labeler 1454, the workload classifier 1456, the
workload behavior predictor 1458, the workload placer 1460, and the
node settings adjuster 1462 may be separately embodied as hardware,
firmware, software, virtualized hardware, emulated architecture,
and/or a combination thereof. For example, the analysis limiter
1452 may be embodied as a hardware component, while the workload
labeler 1454, the workload classifier 1456, the workload behavior
predictor 1458, the workload placer 1460, and the node settings
adjuster 1462 are embodied as a virtualized hardware component or
as some other combination of hardware, firmware, software,
virtualized hardware, emulated architecture, and/or a combination
thereof. Each of the components of the environment 1400 may be
embodied as hardware, firmware, software, or a combination
thereof.
[0069] Referring now to FIG. 15, in the illustrative embodiment,
each managed node 1260 may establish an environment 1500 during
operation. The illustrative environment 1500 includes a network
communicator 1520, a workload executor 1530, a telemetry data
generator 1540, and an availability data manager 1550. As such, in
some embodiments, one or more of the components of the environment
1500 may be embodied as circuitry or a collection of electrical
devices (e.g., network communicator circuitry 1520, workload
executor circuitry 1530, telemetry data generator circuitry 1540,
availability data manager circuitry 1550, etc.). It should be
appreciated that, in such embodiments, one or more of the network
communicator circuitry 1520, workload executor circuitry 1530,
telemetry data generator circuitry 1540, or availability data
manager circuitry 1550 may form a portion of one or more of the CPU
1302, the main memory 1304, the I/O subsystem 1306, and/or other
components of the managed node 1260. In the illustrative
embodiment, the environment 1500 includes node identification data
1502 which may be embodied as any data that uniquely identifies the
managed node 1260 (e.g., a serial number, a media access control
address, or other unique identifier) and may be added to the
telemetry data 1506 and/or the availability data 1508 described
below to facilitate parsing and categorization of the data by the
orchestrator server 1240. The illustrative environment 1500 also
includes workload data 1504 which may be embodied as any data
indicative of the workloads presently assigned to the managed node
1260 and a priority associated with the workload (e.g., normal
priority, high priority, etc.). The telemetry data 1506 is similar
to the telemetry data 1402 described above with reference to FIG.
14, except the telemetry data 1506, in the illustrative embodiment,
pertains specifically to the present managed node 1260 rather than
multiple managed nodes 1260. Additionally, in the illustrative
embodiment, the environment 1500 includes availability data 1508,
which is similar to the availability data 1412, except the
availability data 1508 pertains specifically to the present managed
node 1260 and any other managed nodes 1260 that the present managed
node collected availability data 1508 from, as described in more
detail herein.
[0070] In the illustrative environment 1500, the network
communicator 1520, which may be embodied as hardware, firmware,
software, virtualized hardware, emulated architecture, and/or a
combination thereof as discussed above, is configured to facilitate
inbound and outbound network communications (e.g., network traffic,
network packets, network flows, etc.) to and from the managed node
1260, respectively. To do so, the network communicator 1520 is
configured to receive and process data packets from one system or
computing device (e.g., the client device 1220, the orchestrator
server 1240, and/or another managed node 1260) and to prepare and
send data packets to another computing device or system (e.g., the
client device 1220, the orchestrator server 1240, and/or another managed node 1260). Accordingly, in some embodiments, at
least a portion of the functionality of the network communicator
1520 may be performed by the communication circuitry 1308, and, in
the illustrative embodiment, by the NIC 1310.
[0071] The workload executor 1530, which may be embodied as
hardware, firmware, software, virtualized hardware, emulated
architecture, and/or a combination thereof as discussed above, is
configured to execute workloads assigned to the managed node 1260.
The telemetry data generator 1540, which may be embodied as
hardware, firmware, software, virtualized hardware, emulated
architecture, and/or a combination thereof as discussed above, is
configured to monitor the performance and conditions within the
managed node 1260 as the one or more workloads are executed and
generate the telemetry data 1506.
[0072] The availability data manager 1550, which may be embodied as
hardware, firmware, software, virtualized hardware, emulated
architecture, and/or a combination thereof as discussed above, is
configured to generate the availability data 1508 and report the
availability data 1508 either directly to the orchestrator server
1240 or to another managed node 1260. The availability data manager 1550 may additionally aggregate the availability data 1508 from one or more other managed nodes 1260, such as managed nodes 1260 having a predefined relationship to the present managed node 1260 (e.g., within a predefined logical proximity, such as on the same network switch), managed nodes 1260 identified in a predefined set from which to collect the availability data 1508, or managed nodes 1260 identified pursuant to a swarm intelligence algorithm, such as a bee foraging algorithm. To do so, in the illustrative embodiment,
the availability data manager 1550 includes an availability data
determiner 1552, an availability data reporter 1554, and an
availability data aggregator 1556.
[0073] The availability data determiner 1552, in the illustrative
embodiment, is configured to compare resource utilization values
(e.g., processor utilization, memory utilization, network bandwidth
utilization, etc.) in the telemetry data 1506 to a set of
predefined threshold values such as a processor utilization
threshold, a memory usage threshold, and/or a network bandwidth
threshold to determine an availability of the managed node 1260 to
receive and execute an additional workload. Accordingly, if one or
more of the existing utilizations of one or more of the resources
in the managed node 1260 is in excess of a corresponding predefined
threshold, the availability data determiner 1552 may store, in the
availability data, an indication that the managed node 1260 is
presently unavailable to execute an additional workload. Otherwise,
the availability data determiner 1552 may store an indication that
the managed node 1260 is presently available to execute an
additional workload. Furthermore, in the illustrative embodiment,
the availability data determiner 1552 may select one of multiple
sets of predefined threshold values as a function of the priorities
assigned to the existing workloads. In the illustrative embodiment,
if one or more of the existing workloads has a high priority,
meaning the workload is to be executed at a predictable speed, the
availability data determiner 1552 may select a set of corresponding
predefined thresholds with lower resource utilization values than
if none of the workloads have been designated as high priority.
Doing so may protect high priority workloads from possible
interruption from additional workloads, while enabling managed
nodes 1260 without high priority workloads to take on additional
work.
[0074] The availability data reporter 1554, in the illustrative
embodiment, is configured to report the availability data 1508 to
the orchestrator server 1240, either directly or through another
managed node 1260. The availability data reporter 1554 may report
the availability data 1508 on a repeating, periodic basis without
prompting from another compute device, or may report the
availability data 1508 in response to a query from the orchestrator
server 1240 or another managed node 1260. The availability data
aggregator 1556, in the illustrative embodiment, is configured to
aggregate availability data 1508 from at least one other managed
node 1260. In doing so, the availability data aggregator 1556 may receive the availability data 1508 from one or more managed nodes
1260 that have a predefined relationship to the present managed
node 1260, that are listed in a predefined set of managed nodes
1260 from which to receive availability data 1508, or that are
otherwise identified to the managed node 1260, such as pursuant to
a swarm intelligence algorithm. In a swarm intelligence algorithm,
the availability data aggregator 1556 may determine that one or
more managed nodes 1260 are within an "area" (e.g., a set of
managed nodes 1260) that has historically been available to take on
additional workloads. As such, the managed nodes 1260 within such
areas are more frequently checked for their availability to execute
additional workloads. In some embodiments, the availability data
aggregator 1556 may provide identifiers of managed nodes 1260 in
such an area to other managed nodes 1260 that are responsible for
aggregating and reporting back availability data to the
orchestrator server 1240. In response, those managed nodes 1260 may
frequently check the availability of managed nodes 1260 in that
area and/or other nearby managed nodes 1260 (e.g., within the same
rack, connected to the same switch, or otherwise within a
predefined range from a physical or network topology perspective).
As such, the managed nodes 1260 may exhibit a swarm intelligence
when identifying sets of managed nodes 1260 available to perform
additional workloads.
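One hedged reading of this foraging behavior is that aggregating nodes poll "areas" with a frequency proportional to how often those areas have historically been available, loosely analogous to bees revisiting productive forage sites; the toy sketch below illustrates that idea and is not an algorithm specified by this disclosure:

```python
import random

def pick_area_to_poll(area_history):
    """area_history maps an area id to its historical availability rate
    in [0, 1]; areas that have more often been available to take on work
    are proportionally more likely to be checked next."""
    areas = list(area_history)
    # Small floor keeps every area occasionally reachable.
    weights = [area_history[a] + 0.01 for a in areas]
    return random.choices(areas, weights=weights, k=1)[0]

print(pick_area_to_poll({"rack-1": 0.8, "rack-2": 0.2, "rack-3": 0.0}))
```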
[0075] It should be appreciated that each of the availability data
determiner 1552, the availability data reporter 1554, and the
availability data aggregator 1556 may be separately embodied as
hardware, firmware, software, virtualized hardware, emulated
architecture, and/or a combination thereof. For example, the
availability data determiner 1552 may be embodied as a hardware
component, while the availability data reporter 1554 and the
availability data aggregator 1556 are embodied as a virtualized
hardware component or as some other combination of hardware,
firmware, software, virtualized hardware, emulated architecture,
and/or a combination thereof. Each of the components of the
environment 1500 may be embodied as hardware, firmware, software,
or a combination thereof.
[0076] Referring now to FIG. 16, in use, the orchestrator server
1240 may execute a method 1600 for managing workloads using
availability data generated by the managed nodes 1260. The method
1600 begins with block 1602, in which the orchestrator server 1240
determines whether to manage workloads performed by the managed
nodes 1260. In the illustrative embodiment, the orchestrator server
1240 determines to manage workloads if the orchestrator server 1240
is powered on, in communication with the managed nodes 1260, and
has received at least one request from the client device 1220 to
provide cloud services (i.e., to perform one or more workloads). In
other embodiments, the orchestrator server 1240 may determine
whether to manage workloads based on other factors. Regardless, in
response to a determination to manage workloads, in the
illustrative embodiment, the method 1600 advances to block 1604 in
which the orchestrator server 1240 receives policy data (e.g., the
policy data 1404). In doing so, the orchestrator server 1240 may
receive the policy data 1404 from a user (e.g., an administrator)
through a graphical user interface (not shown), from a
configuration file, or from another source. In receiving the policy
data 1404, the orchestrator server 1240 may receive service life
cycle policy data indicative of a target life cycle of one or more
of the managed nodes 1260. Additionally or alternatively, the
orchestrator server 1240 may receive power consumption policy data
1404 indicative of a target power usage or threshold amount of
power usage of the managed nodes 1260 as they execute the
workloads. The orchestrator server 1240 may additionally or
alternatively receive thermal policy data indicative of a target
temperature or a temperature threshold not to be exceeded by the
managed nodes 1260 as they execute the workloads. Additionally or
alternatively the orchestrator server 1240 may receive other types
of policy data indicative of thresholds or goals to be satisfied
during the execution of the workloads.
[0077] After receiving the policy data 1404, in the illustrative
embodiment, the method 1600 advances to block 1606 in which the
orchestrator server 1240 assigns initial workloads to the managed
nodes 1260. In the illustrative embodiment, the orchestrator server
1240 has not received telemetry data 1402 that would inform a
decision as to where the workloads are to be assigned among the
managed nodes 1260. As such, the orchestrator server 1240 may
assign the workloads to the managed nodes 1260 based on any
suitable method, such as assigning each workload to the first
available managed node that is idle (i.e., is not presently
executing a workload), randomly assigning the workloads, or by any
other method. In the illustrative embodiment, as indicated in block 1608, in assigning the initial workloads to the managed nodes 1260,
the orchestrator server 1240 may assign a priority to each of the
workloads, such as by storing an indicator of the priority in data
describing each workload (e.g., the workload data 1504). In doing
so, the orchestrator server 1240 may assign a normal priority to
one or more of the workloads, as indicated in block 1610. In the
illustrative embodiment, a normal priority is a priority in which
the workload is not required to produce output at specific
instances in time. Alternatively, as indicated in block 1612, the
orchestrator server 1240 may assign a deterministic execution
priority (i.e., a high priority) to one or more of the workloads,
indicating that the workload is to be executed in a predictable
manner and produce outputs at specific times. The priorities may be
determined based on input from the client device 1220, such as a
selection of the desired responsiveness and speed of the services
to be provided by the system 1210. In the illustrative embodiment,
the orchestrator server 1240 may generate initial availability data
based on the assignment of the workloads among the managed nodes
1260, as indicated in block 1614. In doing so, the orchestrator
server 1240 may estimate an expected amount of resources that will
be consumed by each workload, based on the priorities associated
with the workloads and/or based on previously generated profiles
(e.g., workload classifications 1408) if such data is presently
available to the orchestrator server 1240.
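A minimal sketch of such an initial, telemetry-free placement, assuming a first-idle-then-random policy as described above (the node and workload representations are illustrative):

```python
import random

def initial_assignment(workloads, nodes):
    """Assign each workload to the first idle node, falling back to a
    random node when none remain idle; later rebalancing is driven by
    telemetry as the workloads run."""
    idle = [n for n in nodes if not n["workloads"]]
    placement = {}
    for label in workloads:
        node = idle.pop(0) if idle else random.choice(nodes)
        node["workloads"].append(label)
        placement[label] = node["id"]
    return placement

nodes = [{"id": "node-a", "workloads": []}, {"id": "node-b", "workloads": []}]
print(initial_assignment(["w1", "w2", "w3"], nodes))
```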
[0078] After assigning the initial workloads to the managed nodes
1260, the method 1600 advances to block 1616 in which the
orchestrator server 1240 receives status data from the managed
nodes 1260 as the workloads are performed (i.e., executed). In
receiving the status data, the orchestrator server 1240 receives
the availability data 1412 from one or more of the managed nodes
1260 indicating the availability of each managed node 1260 to
receive and perform an additional workload, as represented in block
1618. Further, in receiving the availability data 1412, the orchestrator server 1240, in the illustrative embodiment, determines a reduced set of available nodes from the availability data 1412, as indicated in block 1620. In the illustrative embodiment, the reduced set of
available nodes is the subset of the managed nodes 1260 that
reported that they are available to receive and execute an
additional workload. Additionally, in receiving the status data,
the orchestrator server 1240 receives the telemetry data 1402 from
the managed nodes 1260 as the workloads are performed (i.e.,
executed), as indicated in block 1622. In doing so, the
orchestrator server 1240 may receive temperature data indicative of
a temperature within each managed node 1260, power consumption data
indicative of an amount of power consumed by each managed node
1260, processor utilization data indicative of an amount of
processor usage consumed by each workload performed by each managed
node 1260, memory utilization data for each managed node 1260
(cache utilization data, other volatile memory utilization, and/or
non-volatile memory utilization), network utilization data
indicative of an amount of network bandwidth used by each workload
performed by each managed node 1260, and/or data indicative of
other conditions within each managed node 1260. After receiving the
status data, the orchestrator server 1240 generates data analytics,
as described below.
[0079] Referring now to FIG. 17, in block 1624, the orchestrator
server 1240 generates data analytics as the workloads are performed
by the managed nodes 1260. In generating the data analytics, in the
illustrative embodiment, the orchestrator server 1240 limits the
generation of the data analytics to the reduced set of available
managed nodes 1260, determined in block 1620. By limiting the data
analytics to the reduced set of available managed nodes 1260, the
orchestrator server 1240 may greatly reduce the number of calculations that would otherwise be performed to determine which
managed nodes 1260 are to receive adjustments to their workloads,
without overlooking managed nodes 1260 that have the capacity to
execute an additional workload. In block 1628, the orchestrator
server 1240 identifies trends in the resource utilization of the
workloads. For example, the orchestrator server 1240 may identify
patterns in which one or more of the workloads cycle through phases
of high processor utilization with low memory usage, followed by
low processor utilization and high memory usage, or other phases.
As indicated in block 1630, in the illustrative embodiment, the
orchestrator server 1240 generates profiles of the workloads. In
doing so, in the illustrative embodiment, the orchestrator server
1240 generates the labels 1406 for the workloads to uniquely
identify each workload, as indicated in block 1632. Additionally,
in the illustrative embodiment, the orchestrator server 1240
generates the classifications 1408 of the workloads, as indicated
in block 1634. In the illustrative embodiment, as indicated in
block 1636, in generating the data analytics, the orchestrator
server 1240 also predicts future resource utilization of the
workloads, such as by comparing a present resource utilization of
each workload to the trends identified in block 1628 to determine
the present phase of each workload, and then identifying the
upcoming phases of the workloads from the trends.
[0080] In block 1638, the orchestrator server 1240 determines, as a
function of the data analytics, adjustments to the workload
assignments as the workloads are performed, to improve resource
utilization. In block 1640, the orchestrator server 1240 may add or
change workload assignments among the managed nodes 1260. In doing
so, the orchestrator server 1240 may identify one or more available
managed nodes 1260 executing workloads with relatively low resource
utilization and assign additional workloads to those managed nodes
1260. As stated above, the orchestrator server 1240 may also reassign workloads among the managed nodes 1260. For example, the orchestrator server
1240 may identify, based on the data analytics, workloads having
complementary resource utilizations (e.g., a workload with a high
processor utilization and low memory utilization and another
workload with low processor utilization and high memory
utilization), and assign those two workloads to the same managed
node 1260 to improve the resource utilization. In the illustrative
embodiment, the orchestrator server 1240 limits the additions and
changes to the workload assignments to only the reduced set of
available managed nodes 1260.
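The complementary-placement idea can be illustrated by scoring candidate workload pairs on how differently they stress the processor and memory; the scoring heuristic below is an assumption added for illustration:

```python
def complementary_score(a, b):
    """Score is highest when two workloads stress different resources."""
    return abs(a["cpu"] - b["cpu"]) + abs(a["memory"] - b["memory"])

def best_partner(workload, candidates):
    """Pick the candidate workload that best complements the given one,
    so the pair can share one available managed node efficiently."""
    return max(candidates, key=lambda c: complementary_score(workload, c))

cpu_heavy = {"label": "w1", "cpu": 0.9, "memory": 0.2}
others = [{"label": "w2", "cpu": 0.8, "memory": 0.3},
          {"label": "w3", "cpu": 0.2, "memory": 0.9}]
print(best_partner(cpu_heavy, others)["label"])  # "w3"
```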
[0081] The orchestrator server 1240 may additionally determine
node-specific adjustments, as indicated in block 1644. The
node-specific adjustments may be embodied as changes to settings
within one or more of the managed nodes 1260, such as in the
operating system, the drivers, and/or the firmware of components
(e.g., the CPU 1302, the memory 1304, the communication circuitry
1308, the one or more data storage devices 1312, etc.) to improve
resource utilization. As such, in the illustrative embodiment, in
determining the node-specific adjustments, the orchestrator server
1240 may determine processor throttle adjustments, such as clock
speed and/or processor affinity for one or more workloads, memory
usage adjustments, such as allocations of volatile memory (e.g.,
the memory 1304) and/or data storage capacity (e.g., capacity of
the one or more data storage devices 1312), memory bus speeds,
and/or other memory-related settings, network bandwidth
adjustments, such as an available bandwidth of the communication
circuitry 1308 to be allocated to each workload, and/or one or more
fan speed adjustments to increase or decrease the cooling within
the managed node 1260. In doing so, in the illustrative embodiment,
the orchestrator server 1240 limits the node-specific adjustments
to the reduced set of available managed nodes 1260. Additionally, the orchestrator server 1240 may modify the adjustments to the
assignments of the workloads and/or to the node-specific
adjustments to comply with the policy data 1404. As an example, the
policy data 1404 may indicate that the power consumption is not to
exceed a predefined threshold and, in view of the threshold, the
orchestrator server 1240 may determine to reduce the speed of the
CPU 1302 to satisfy the threshold and reassign a
processor-intensive workload away from the managed node 1260
because, at the reduced speed, the CPU 1302 would be unable to
complete the processor-intensive workload within a predefined time
period (e.g., a time period specified in a Service Level Agreement
(SLA) between the user of the client device 1220 and the operator
of the system 1210).
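The interaction between a power consumption policy and the adjustments in the example above might look like the following sketch; the power model, field names, and cap value are purely illustrative assumptions:

```python
def enforce_power_policy(node, power_cap_watts):
    """If the node exceeds the power cap, throttle its CPU; any workload
    that the throttled CPU could no longer finish within its agreed time
    period is flagged for reassignment to another available node."""
    actions = []
    if node["power_watts"] > power_cap_watts:
        actions.append(("throttle_cpu", node["id"]))
        for w in node["workloads"]:
            if w["processor_intensive"] and w["deadline_sensitive"]:
                actions.append(("reassign", w["label"]))
    return actions

node = {"id": "node-a", "power_watts": 410.0,
        "workloads": [{"label": "w1", "processor_intensive": True,
                       "deadline_sensitive": True}]}
print(enforce_power_policy(node, 400.0))
```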
[0082] Referring now to FIG. 18, in block 1650, the orchestrator
server 1240 determines whether adjustments were determined. If not,
the method 1600 loops back to block 1616 of FIG. 16, in which the
orchestrator server 1240 again receives the status data from the
managed nodes 1260 as the workloads are performed. Otherwise, if
adjustments were determined, the method 1600 advances to block 1652
in which the orchestrator server 1240 applies the determined
adjustments. In doing so, the orchestrator server 1240 may issue
one or more requests to perform a live migration of a workload
between two managed nodes 1260 (i.e., a workload reassignment). In
the illustrative embodiment, the migration is live because, rather
than waiting until the workloads have been completed to analyze the
telemetry data 1402, the orchestrator server 1240 collects and
analyzes the telemetry data 1402, and makes adjustments online
(i.e., as the workloads are being performed). Additionally or
alternatively, the orchestrator server 1240 may issue one or more requests to one or more of the managed
nodes 1260 to apply the node-specific adjustments described above
with reference to block 1644 of FIG. 17. After applying the
adjustments, the method 1600 loops back to block 1616 of FIG. 16 in
which the orchestrator server 1240 receives additional status data
from the managed nodes 1260. It should be understood from the above
description that, in the illustrative embodiment, any adjustments
made in block 1652 are to managed nodes 1260 that reported
themselves as being available in the availability data 1412 (i.e.,
the reduced set of managed nodes determined in block 1620).
[0083] Referring now to FIG. 19, in use, a managed node 1260 may
execute a method 1900 for generating and reporting availability
data to assist in the management of workloads. The method 1900
begins with block 1902 in which the managed node 1260 determines
whether to proceed with operation. In the illustrative embodiment,
the managed node 1260 may determine to proceed if the managed node
1260 is receiving power and is connected to the orchestrator server
1240. In other embodiments, the managed node 1260 may determine
whether to proceed based on one or more other factors. Regardless,
in response to a determination to proceed, the method 1900 advances
to block 1904, in which the managed node 1260 receives a workload
assignment from the orchestrator server 1240. In doing so, the
managed node 1260 may receive an indication of the priority of the
workload (e.g., a priority indicator included in workload data 1504
provided by the orchestrator server 1240), as indicated in block
1906. In receiving the indication of the priority, the managed node
1260 may receive an indication that the received workload is to be
executed deterministically (e.g., high priority), as indicated in
block 1908. Alternatively, the managed node 1260 may receive an
indication that the workload is to be executed with normal
priority, as indicated in block 1910. As indicated in block 1912,
in receiving a workload assignment, the managed node 1260 may
perform a live migration of a workload from another managed node
1260.
[0084] After receiving the workload assignments, the managed node 1260 may receive node-specific adjustments from the orchestrator server 1240, as indicated in block 1914, such as changes to settings in the operating system,
the drivers, and/or the firmware of components (e.g., the CPU 1302,
the memory 1304, the communication circuitry 1308, the one or more
data storage devices 1312, etc.) to alter the power and/or resource
utilization of the managed node 1260. In block 1916, the managed
node 1260 executes the assigned workload. In doing so, the managed
node 1260 may apply the node-specific adjustments received in block
1914. Subsequently, as indicated in block 1920, the managed node
1260 may receive a request for availability data. In receiving the
request for availability data, the managed node 1260 may receive
the request from the orchestrator server 1240 as indicated in block
1922. Alternatively, the managed node 1260 may receive the request
from another managed node 1260, as indicated in block 1924.
[0085] Referring now to FIG. 20, in block 1926, the managed node
1260 generates telemetry data (e.g., the telemetry data 1506). In
generating the telemetry data 1506, the managed node 1260 may
generate temperature data indicative of one or more temperatures in
the managed node 1260, as indicated in block 1928. Additionally or
alternatively, the managed node 1260 may generate power consumption
data indicative of an amount of power presently consumed by the
managed node 1260 while executing workloads assigned to it, as
indicated in block 1930. As indicated in block 1932, the managed
node 1260 may additionally or alternatively generate processor
utilization data indicative of the amount of the available
computational capacity of the processor presently used to execute
workloads assigned to the managed node 1260. The managed node 1260
may additionally or alternatively generate memory utilization data
indicative of a presently used amount, or a frequency of use, of the available memory resources in the managed node 1260, as indicated
in block 1934. Additionally or alternatively, the managed node 1260
may generate network utilization data indicative of an amount of
network bandwidth presently used by the managed node 1260.
[0086] After the managed node 1260 generates the telemetry data
1506, the method 1900 advances to block 1938, in which the managed
node 1260 compares the telemetry data 1506 to one or more
predefined thresholds to determine an availability of the managed
node 1260 to receive and execute an additional workload. In doing
so, the managed node 1260 may select a set of predefined thresholds
as a function of the indication of the priority of the workload
(e.g., an indication of the priority in the workload data 1504).
For example, if an assigned workload has been designated as high priority (e.g., to be executed deterministically), the managed node 1260 may select a set of predefined thresholds with lower values
that, if exceeded, would cause the managed node 1260 to be deemed
unavailable to take on an additional workload. As such, the
processor utilization threshold when the managed node 1260 is
executing a high priority workload may be a lower value (e.g., 70%)
than the processor utilization threshold (e.g., 80%) if the managed
node 1260 is presently only executing workloads that do not have
high priority. As indicated in block 1942, the managed node 1260
may compare the processor utilization to a predefined processor
availability threshold. Additionally or alternatively, the managed
node 1260 may compare the memory utilization data to a predefined
memory availability threshold, as indicated in block 1944, and/or
may compare other components of the telemetry data 1506 to
corresponding availability thresholds (e.g., a predefined network
bandwidth availability threshold, a predefined power consumption
availability threshold, a predefined temperature availability
threshold, etc.), as indicated in block 1946.
[0087] In block 1948, the managed node 1260 determines whether the
thresholds were satisfied. In the illustrative embodiment, if any
of the values in the telemetry data 1506 exceeded a corresponding
predefined threshold, the managed node 1260 determines that the
thresholds were not satisfied. In other embodiments, the managed
node 1260 may determine whether the thresholds were satisfied based
on another scheme (e.g., whether a majority of the predefined
thresholds were exceeded, etc.). Regardless, in response to a
determination that the thresholds were not satisfied, the method
1900 advances to block 1950 in which the managed node 1260 stores
an indication of non-availability in the availability data 1508.
Otherwise, the method 1900 advances to block 1952, in which the
managed node 1260 stores an indication that the managed node 1260
is available in the availability data 1508. In either case, the
method 1900 proceeds with the collection and reporting of the
availability data 1508 to the orchestrator server 1240, as
described herein.
[0088] Referring now to FIG. 21, the managed node 1260 may receive
availability data 1508 from one or more other managed nodes 1260,
as indicated in block 1954. In doing so, the managed node 1260 may
receive availability data 1508 from one or more managed nodes 1260
having a predefined relationship to the present managed node 1260,
as indicated in block 1956. For example, as indicated in block
1958, the managed node 1260 may receive availability data 1508 from
one or more managed nodes 1260 identified in a predefined set of
managed nodes 1260. Alternatively, the managed node 1260 may
receive availability data 1508 from one or more managed nodes 1260
within a predefined proximity of the present managed node 1260, as
indicated in block 1960. As indicated in block 1962, the managed
node 1260 may receive availability data 1508 from one or more
managed nodes pursuant to a foraging algorithm, such as a bee
foraging algorithm, as described above.
[0089] In block 1964, the managed node 1260 reports status data. In
doing so, as indicated in block 1966, the managed node 1260 reports the availability data 1508. In reporting the availability data, the
managed node 1260 may report the availability data to the
orchestrator server 1240 directly, as indicated in block 1968.
Alternatively, the managed node 1260 may report the availability
data to another managed node 1260 to be collected (i.e.,
aggregated) and reported back to the orchestrator server 1240. In
block 1974, the managed node 1260 also reports the telemetry data
1506 to the orchestrator server 1240. After the managed node 1260
has reported the status data, the method 1900 loops back to block
1902 in which the managed node 1260 determines whether to continue
operations (i.e., to repeat the method 1900).
EXAMPLES
[0090] Illustrative examples of the technologies disclosed herein
are provided below. An embodiment of the technologies may include
any one or more, and any combination of, the examples described
below.
[0091] Example 1 includes an orchestrator server to utilize
availability data for a set of managed nodes to assign workloads,
the orchestrator server comprising one or more processors; one or
more memory devices having stored therein a plurality of
instructions that, when executed by the one or more processors,
cause the orchestrator server to assign workloads to the managed
nodes; receive availability data from the managed nodes, wherein
the availability data is indicative of a determination by each of
the managed nodes as to an availability of the managed node to
receive an additional workload; receive telemetry data from the
managed nodes, wherein the telemetry data is indicative of resource
utilization by each of the managed nodes as the workloads are
performed; determine, as a function of the availability data, a
reduced set of available managed nodes for analysis; determine, as
a function of the telemetry data, adjustments to the workload
assignments to increase the resource utilization among the reduced
set of managed nodes; and apply the determined adjustments to the
reduced set of managed nodes as the workloads are performed.
[0092] Example 2 includes the subject matter of Example 1, and
wherein to assign the workloads comprises to assign a priority to
one or more of the workloads.
[0093] Example 3 includes the subject matter of any of Examples 1
and 2, and wherein to assign a priority to one or more of the
workloads comprises to assign a deterministic execution priority to
one or more of the workloads.
[0094] Example 4 includes the subject matter of any of Examples
1-3, and wherein to assign the workloads comprises to generate
availability data as a function of the assignment of the
workloads.
[0095] Example 5 includes the subject matter of any of Examples
1-4, and wherein to determine, as a function of the telemetry data,
adjustments to the workload assignments comprises to generate, as a
function of the telemetry data, data analytics as the workloads are
performed.
[0096] Example 6 includes the subject matter of any of Examples
1-5, and wherein to generate data analytics comprises to limit the
generation of the data analytics to the reduced set of managed
nodes.
[0097] Example 7 includes the subject matter of any of Examples
1-6, and wherein to generate data analytics comprises to identify
trends in resource utilization of the workloads performed by the
managed nodes in the reduced set of managed nodes.
[0098] Example 8 includes the subject matter of any of Examples
1-7, and wherein to generate data analytics comprises to generate
profiles of the workloads performed by the managed nodes in the
reduced set of managed nodes.
[0099] Example 9 includes the subject matter of any of Examples
1-8, and wherein to generate data analytics comprises to predict
future resource utilization of the workloads performed by the
managed nodes in the reduced set of managed nodes.
[0100] Example 10 includes the subject matter of any of Examples
1-9, and wherein the plurality of instructions, when executed by
the one or more processors, further cause the orchestrator
server to obtain policy data indicative of one or more goals to be
achieved in the management of the workloads; and modify the
adjustments as a function of the policy data.
[0101] Example 11 includes the subject matter of any of Examples
1-10, and wherein to determine the adjustments comprises to
determine one or more node-specific adjustments indicative of
changes to an availability of one or more resources of a managed
node in the reduced set of managed nodes to one or more of the
workloads performed by the managed node.
[0102] Example 12 includes the subject matter of any of Examples
1-11, and wherein to determine the node-specific adjustments
comprises to determine at least one of a processor throttle
adjustment, a memory usage adjustment, a network bandwidth
adjustment, or a fan speed adjustment.
[0103] Example 13 includes the subject matter of any of Examples
1-12, and wherein to apply the determined adjustments comprises to
issue a request to perform a live migration of a workload between
the managed nodes.
[0104] Example 14 includes the subject matter of any of Examples
1-13, and wherein to apply the determined adjustments comprises to
issue a request to one of the managed nodes to apply one or more
node-specific adjustments indicative of changes to an availability
of one or more resources of the managed node to one or more of the
workloads performed by the managed node.
[0105] Example 15 includes a method for utilizing availability data
for a set of managed nodes to assign workloads, the method
comprising assigning, by an orchestrator server, workloads to the
managed nodes; receiving, by the orchestrator server, availability
data from the managed nodes, wherein the availability data is
indicative of a determination by each of the managed nodes as to an
availability of the managed node to receive an additional workload;
receiving, by the orchestrator server, telemetry data from the
managed nodes, wherein the telemetry data is indicative of resource
utilization by each of the managed nodes as the workloads are
performed; determining, by the orchestrator server and as a
function of the availability data, a reduced set of available
managed nodes for analysis; determining, by the orchestrator server
and as a function of the telemetry data, adjustments to the
workload assignments to increase the resource utilization among the
reduced set of managed nodes; and applying, by the orchestrator
server, the determined adjustments to the reduced set of managed
nodes as the workloads are performed.
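By way of non-limiting illustration, one pass of the Example 15 method might look like the following sketch. The NodeState record, the least-utilized placement rule, and the 0.8 utilization target are assumptions of the sketch rather than features recited in the claims; at least one available node is assumed.

```python
# Illustrative single pass of the Example 15 method: assign, reduce, adjust.
from dataclasses import dataclass, field


@dataclass
class NodeState:
    node_id: str
    available: bool          # from the node's self-reported availability data
    cpu_utilization: float   # from the node's telemetry data
    workloads: list = field(default_factory=list)


def orchestrate(nodes, incoming_workloads, target_utilization=0.8):
    # Determine, as a function of the availability data, the reduced set
    # of available managed nodes for analysis.
    reduced_set = [n for n in nodes if n.available]

    # Assign each incoming workload to the least-utilized available node.
    for workload in incoming_workloads:
        target = min(reduced_set, key=lambda n: n.cpu_utilization)
        target.workloads.append(workload)

    # Determine, from telemetry, adjustments to increase utilization within
    # the reduced set: here, under-utilized nodes become migration targets.
    return {
        n.node_id: "migration_target"
        for n in reduced_set
        if n.cpu_utilization < target_utilization
    }
```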
[0106] Example 16 includes the subject matter of Example 15, and
wherein assigning the workloads comprises assigning a priority to
one or more of the workloads.
[0107] Example 17 includes the subject matter of any of Examples 15
and 16, and wherein assigning a priority to one or more of the
workloads comprises assigning a deterministic execution priority to
one or more of the workloads.
[0108] Example 18 includes the subject matter of any of Examples
15-17, and wherein assigning the workloads comprises generating
availability data as a function of the assignment of the
workloads.
[0109] Example 19 includes the subject matter of any of Examples
15-18, and wherein determining, as a function of the telemetry
data, adjustments to the workload assignments comprises generating,
as a function of the telemetry data, data analytics as the
workloads are performed.
[0110] Example 20 includes the subject matter of any of Examples
15-19, and wherein generating data analytics comprises limiting the
generation of the data analytics to the reduced set of managed
nodes.
[0111] Example 21 includes the subject matter of any of Examples
15-20, and wherein generating data analytics comprises identifying
trends in resource utilization of the workloads performed by the
managed nodes in the reduced set of managed nodes.
[0112] Example 22 includes the subject matter of any of Examples
15-21, and wherein generating data analytics comprises generating
profiles of the workloads performed by the managed nodes in the
reduced set of managed nodes.
[0113] Example 23 includes the subject matter of any of Examples
15-22, and wherein generating data analytics comprises predicting
future resource utilization of the workloads performed by the
managed nodes in the reduced set of managed nodes.
[0114] Example 24 includes the subject matter of any of Examples
15-23, and further including obtaining, by the orchestrator server,
policy data indicative of one or more goals to be achieved in the
management of the workloads; and modifying, by the orchestrator
server, the adjustments as a function of the policy data.
[0115] Example 25 includes the subject matter of any of Examples
15-24, and wherein determining the adjustments comprises
determining one or more node-specific adjustments indicative of
changes to an availability of one or more resources of a managed
node in the reduced set of managed nodes to one or more of the
workloads performed by the managed node.
[0116] Example 26 includes the subject matter of any of Examples
15-25, and wherein determining the node-specific adjustments
comprises determining at least one of a processor throttle
adjustment, a memory usage adjustment, a network bandwidth
adjustment, or a fan speed adjustment.
[0117] Example 27 includes the subject matter of any of Examples
15-26, and wherein applying the determined adjustments comprises
issuing a request to perform a live migration of a workload between
the managed nodes.
[0118] Example 28 includes the subject matter of any of Examples
15-27, and wherein applying the determined adjustments comprises
issuing a request to one of the managed nodes to apply one or more
node-specific adjustments indicative of changes to an availability
of one or more resources of the managed node to one or more of the
workloads performed by the managed node.
[0119] Example 29 includes one or more machine-readable storage
media comprising a plurality of instructions stored thereon that in
response to being executed, cause an orchestrator server to perform
the method of any of Examples 15-28.
[0120] Example 30 includes an orchestrator server to manage
workloads among a plurality of managed nodes coupled to a network,
the orchestrator server comprising one or more processors;
communication circuitry coupled to the one or more processors; one
or more memory devices having stored therein a plurality of
instructions that, when executed by the one or more processors,
cause the orchestrator server to perform the method of any of
Examples 15-28.
[0121] Example 31 includes an orchestrator server to utilize
availability data for a set of managed nodes to assign workloads,
the orchestrator server comprising resource manager circuitry to
assign workloads to the managed nodes; telemetry monitor circuitry
to receive availability data from the managed nodes, wherein the
availability data is indicative of a determination by each of the
managed nodes as to an availability of the managed node to receive
an additional workload, and receive telemetry data from the managed
nodes, wherein the telemetry data is indicative of resource
utilization by each of the managed nodes as the workloads are
performed; wherein the resource manager circuitry is further to
determine, as a function of the availability data, a reduced set of
available managed nodes for analysis, determine, as a function of
the telemetry data, adjustments to the workload assignments to
increase the resource utilization among the reduced set of managed
nodes, and apply the determined adjustments to the reduced set of
managed nodes as the workloads are performed.
[0122] Example 32 includes the subject matter of Example 31, and
wherein to assign the workloads comprises to assign a priority to
one or more of the workloads.
[0123] Example 33 includes the subject matter of any of Examples 31
and 32, and wherein to assign a priority to one or more of the
workloads comprises to assign a deterministic execution priority to
one or more of the workloads.
[0124] Example 34 includes the subject matter of any of Examples
31-33, and wherein to assign the workloads comprises to generate
availability data as a function of the assignment of the
workloads.
[0125] Example 35 includes the subject matter of any of Examples
31-34, and wherein to determine, as a function of the telemetry
data, adjustments to the workload assignments comprises to
generate, as a function of the telemetry data, data analytics as
the workloads are performed.
[0126] Example 36 includes the subject matter of any of Examples
31-35, and wherein to generate data analytics comprises to limit
the generation of the data analytics to the reduced set of managed
nodes.
[0127] Example 37 includes the subject matter of any of Examples
31-36, and wherein to generate data analytics comprises to identify
trends in resource utilization of the workloads performed by the
managed nodes in the reduced set of managed nodes.
[0128] Example 38 includes the subject matter of any of Examples
31-37, and wherein to generate data analytics comprises to generate
profiles of the workloads performed by the managed nodes in the
reduced set of managed nodes.
[0129] Example 39 includes the subject matter of any of Examples
31-38, and wherein to generate data analytics comprises to predict
future resource utilization of the workloads performed by the
managed nodes in the reduced set of managed nodes.
[0130] Example 40 includes the subject matter of any of Examples
31-39, and further including policy manager circuitry to obtain
policy data indicative of one or more goals to be achieved in the
management of the workloads, wherein the resource manager circuitry
is further to modify the adjustments as a function of the policy
data.
[0131] Example 41 includes the subject matter of any of Examples
31-40, and wherein to determine the adjustments comprises to
determine one or more node-specific adjustments indicative of
changes to an availability of one or more resources of a managed
node in the reduced set of managed nodes to one or more of the
workloads performed by the managed node.
[0132] Example 42 includes the subject matter of any of Examples
31-41, and wherein to determine the node-specific adjustments
comprises to determine at least one of a processor throttle
adjustment, a memory usage adjustment, a network bandwidth
adjustment, or a fan speed adjustment.
[0133] Example 43 includes the subject matter of any of Examples
31-42, and wherein to apply the determined adjustments comprises to
issue a request to perform a live migration of a workload between
the managed nodes.
[0134] Example 44 includes the subject matter of any of Examples
31-43, and wherein to apply the determined adjustments comprises to
issue a request to one of the managed nodes to apply one or more
node-specific adjustments indicative of changes to an availability
of one or more resources of the managed node to one or more of the
workloads performed by the managed node.
[0135] Example 45 includes an orchestrator server to manage
workloads among a plurality of managed nodes coupled to a network,
the orchestrator server comprising circuitry for assigning
workloads to the managed nodes; circuitry for receiving availability data
from the managed nodes, wherein the availability data is indicative
of a determination by each of the managed nodes as to an
availability of the managed node to receive an additional workload;
circuitry for receiving telemetry data from the managed nodes,
wherein the telemetry data is indicative of resource utilization by
each of the managed nodes as the workloads are performed; means for
determining, as a function of the availability data, a reduced set
of available managed nodes for analysis; means for determining, as
a function of the telemetry data, adjustments to the workload
assignments to increase the resource utilization among the reduced
set of managed nodes; and means for applying the determined
adjustments to the reduced set of managed nodes as the workloads
are performed.
[0136] Example 46 includes the subject matter of Example 45, and
wherein the circuitry for assigning the workloads comprises
circuitry for assigning a priority to one or more of the
workloads.
[0137] Example 47 includes the subject matter of any of Examples 45
and 46, and wherein the circuitry for assigning a priority to one
or more of the workloads comprises circuitry for assigning a
deterministic execution priority to one or more of the workloads.
[0138] Example 48 includes the subject matter of any of Examples
45-47, and wherein the circuitry for assigning the workloads
comprises circuitry for generating availability data as a function
of the assignment of the workloads.
[0139] Example 49 includes the subject matter of any of Examples
45-48, and wherein the means for determining, as a function of the
telemetry data, adjustments to the workload assignments comprises
means for generating, as a function of the telemetry data, data
analytics as the workloads are performed.
[0140] Example 50 includes the subject matter of any of Examples
45-49, and wherein the means for generating data analytics
comprises means for limiting the generation of the data analytics
to the reduced set of managed nodes.
[0141] Example 51 includes the subject matter of any of Examples
45-50, and wherein the means for generating data analytics
comprises means for identifying trends in resource utilization of
the workloads performed by the managed nodes in the reduced set of
managed nodes.
[0142] Example 52 includes the subject matter of any of Examples
45-51, and wherein the means for generating data analytics
comprises means for generating profiles of the workloads performed
by the managed nodes in the reduced set of managed nodes.
[0143] Example 53 includes the subject matter of any of Examples
45-52, and wherein the means for generating data analytics comprises means
for predicting future resource utilization of the workloads
performed by the managed nodes in the reduced set of managed
nodes.
[0144] Example 54 includes the subject matter of any of Examples
45-53, and further including circuitry for obtaining policy data
indicative of one or more goals to be achieved in the management of
the workloads; and means for modifying the adjustments as a
function of the policy data.
[0145] Example 55 includes the subject matter of any of Examples
45-54, and wherein the means for determining the adjustments
comprises means for determining one or more node-specific
adjustments indicative of changes to an availability of one or more
resources of a managed node in the reduced set of managed nodes to
one or more of the workloads performed by the managed node.
[0146] Example 56 includes the subject matter of any of Examples
45-55, and wherein the means for determining the node-specific
adjustments comprises means for determining at least one of a
processor throttle adjustment, a memory usage adjustment, a network
bandwidth adjustment, or a fan speed adjustment.
[0147] Example 57 includes the subject matter of any of Examples
45-56, and wherein the means for applying the determined
adjustments comprises means for issuing a request to perform a live
migration of a workload between the managed nodes.
[0148] Example 58 includes the subject matter of any of Examples
45-57, and wherein the means for applying the determined
adjustments comprises means for issuing a request to one of the
managed nodes to apply one or more node-specific adjustments
indicative of changes to an availability of one or more resources
of the managed node to one or more of the workloads performed by
the managed node.
[0149] Example 59 includes a managed node for providing
availability data to an orchestrator server, the managed node
comprising one or more processors; communication circuitry coupled
to the one or more processors; one or more memory devices having
stored therein a plurality of instructions that, when executed by
the one or more processors, cause the managed node to receive a
workload from the orchestrator server; generate telemetry data
indicative of resource utilization as the workload is performed;
compare the telemetry data to one or more predefined thresholds to
provide availability data indicative of an availability of the
managed node to receive an additional workload; and report the
availability data to be used by the orchestrator server to adjust
workload assignments.
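By way of illustration, the managed-node behavior of Example 59, together with the processor and memory threshold comparisons of Examples 63-64 below, might be sketched as follows. The injected sampling callables, the threshold keys, and the reporting callback are assumptions of the sketch.

```python
# Hedged sketch of Example 59 (and the comparisons of Examples 63-64).
def generate_telemetry(sample_cpu, sample_memory):
    """Sample current resource utilization (sampling functions injected)."""
    return {"cpu": sample_cpu(), "memory": sample_memory()}


def compute_availability(telemetry, thresholds):
    """Available only if every metric is under its predefined threshold."""
    return all(telemetry[key] < thresholds[key] for key in thresholds)


def report(send_to_orchestrator, node_id, telemetry, thresholds):
    # Report availability data for the orchestrator to adjust assignments.
    send_to_orchestrator({
        "node_id": node_id,
        "available": compute_availability(telemetry, thresholds),
    })
```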
[0150] Example 60 includes the subject matter of Example 59, and
wherein to report the availability data comprises to report the
availability data based on a foraging algorithm.
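Example 60 does not define the foraging algorithm. As one speculative reading, loosely inspired by ant-foraging heuristics, a node might report more frequently the more spare capacity it has, steering the orchestrator toward well-provisioned nodes; the probability rule below is entirely an assumption for illustration.

```python
# Speculative sketch of one possible foraging-style reporting rule.
import random


def should_report(spare_capacity, base_rate=0.1):
    """Report with a probability that grows with spare capacity (0..1)."""
    return random.random() < min(1.0, base_rate + spare_capacity)
```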
[0151] Example 61 includes the subject matter of any of Examples 59
and 60, and wherein the plurality of instructions, when executed by
the one or more processors, further cause the managed node to
receive an indication of a priority of the workload from the
orchestrator server; and wherein to compare the telemetry data to
the one or more predefined thresholds comprises to select at least
one predefined threshold as a function of the priority of the
workload.
[0152] Example 62 includes the subject matter of any of Examples
59-61, and wherein to receive an indication of a priority comprises
to receive an indication that the workload is to be executed
deterministically; and to select at least one predefined threshold
comprises to select at least one threshold associated with
deterministic execution.
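Examples 61-62 select the predefined threshold as a function of workload priority, with a threshold set associated with deterministic execution. A minimal sketch, in which the stricter deterministic limits are assumed values:

```python
# Sketch of Examples 61-62: priority-dependent threshold selection.
THRESHOLDS = {
    "default":       {"cpu": 0.90, "memory": 0.90},
    "deterministic": {"cpu": 0.50, "memory": 0.60},  # assumed stricter limits
}


def select_thresholds(priority):
    key = "deterministic" if priority == "deterministic" else "default"
    return THRESHOLDS[key]
```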
[0153] Example 63 includes the subject matter of any of Examples
59-62, and wherein to compare the telemetry data to the one or more
predefined thresholds comprises to compare processor utilization
data to a processor availability threshold.
[0154] Example 64 includes the subject matter of any of Examples
59-63, and wherein to compare the telemetry data to the one or more
predefined thresholds comprises to compare memory utilization data
to a memory availability threshold.
[0155] Example 65 includes the subject matter of any of Examples
59-64, and wherein to report the availability data comprises to
report the availability data to another managed node to be reported
to the orchestrator server.
[0156] Example 66 includes the subject matter of any of Examples
59-65, and wherein to report the availability data comprises to
report the availability data directly to the orchestrator
server.
[0157] Example 67 includes the subject matter of any of Examples
59-66, and wherein the plurality of instructions, when executed by
the one or more processors, further cause the managed node to
receive additional availability data from at least one other
managed node; and to report the availability data comprises to
report the generated availability data and the additional
availability data to the orchestrator server.
[0158] Example 68 includes the subject matter of any of Examples
59-67, and wherein to receive additional availability data from at
least one other managed node comprises to receive additional
availability data from at least one other managed node with a
predefined relationship to the managed node.
[0159] Example 69 includes the subject matter of any of Examples
59-68, and wherein to receive additional availability data from at
least one other managed node comprises to receive additional
availability data from at least one other managed node identified
in a predefined set of managed nodes.
[0160] Example 70 includes the subject matter of any of Examples
59-69, and wherein to receive additional availability data from at
least one other managed node comprises to receive additional
availability data from at least one other managed node within a
predefined proximity of the managed node.
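Examples 67-70 have a node aggregate availability data from peers gated by a predefined relationship, membership in a predefined set, or physical proximity. The three predicates below (rack identity, node-ID set, slot distance) are illustrative assumptions:

```python
# Sketch of Examples 67-70: gated aggregation of peer availability data.
def accept_peer_data(peer, me, mode, allowed_set=frozenset(), max_distance=1):
    if mode == "relationship":
        return peer.get("rack") == me.get("rack")       # assumed relationship
    if mode == "predefined_set":
        return peer["node_id"] in allowed_set           # predefined set
    if mode == "proximity":
        return abs(peer["slot"] - me["slot"]) <= max_distance
    return False


def aggregate_and_report(send, my_report, peer_reports, me, mode):
    accepted = [p for p in peer_reports if accept_peer_data(p, me, mode)]
    # Report own data together with the accepted peer data (Example 67).
    send([my_report] + accepted)
```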
[0161] Example 71 includes the subject matter of any of Examples
59-70, and wherein the plurality of instructions, when executed by
the one or more processors, further cause the managed node to
receive a request for the availability data from the orchestrator
server; and wherein to report the availability data comprises to
report, in response to the request, the availability data.
[0162] Example 72 includes the subject matter of any of Examples
59-71, and wherein the plurality of instructions, when executed by
the one or more processors, further cause the managed node to
receive a request for the availability data from another managed
node; and wherein to report the availability data comprises to
report, in response to the request, the availability data.
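Examples 71-72 describe pull-style reporting, in which the availability data is sent only in response to a request, whether the requester is the orchestrator server or another managed node. A minimal handler sketch; the op name and reply callable are assumed:

```python
# Sketch of Examples 71-72: report availability only on request.
def handle_request(request, current_availability, reply):
    if request.get("op") == "get_availability":
        reply(request["requester"], current_availability)
```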
[0163] Example 73 includes the subject matter of any of Examples
59-72, and wherein the plurality of instructions, when executed by
the one or more processors, further cause the managed node to
receive node-specific adjustments from the orchestrator server,
wherein the node-specific adjustments are indicative of at least
one of a processor throttle adjustment, a memory usage adjustment,
a network bandwidth adjustment, or a fan speed adjustment; and
execute the workload with the node-specific adjustments.
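For Example 73, a node applies the received node-specific adjustments before executing the workload. The actuator callables below are hypothetical stand-ins for platform-specific controls (e.g., cgroup limits, traffic shaping, or fan control), not interfaces recited in the Examples:

```python
# Sketch of Example 73: apply node-specific adjustments, then run.
def apply_and_run(adjustments, run_workload,
                  set_cpu_cap, set_memory_limit, set_bandwidth, set_fan_speed):
    actuators = {
        "processor_throttle_pct": set_cpu_cap,
        "memory_limit_mb": set_memory_limit,
        "network_bandwidth_mbps": set_bandwidth,
        "fan_speed_pct": set_fan_speed,
    }
    for key, value in adjustments.items():
        if value is not None and key in actuators:
            actuators[key](value)   # apply each provided adjustment
    run_workload()                  # execute with the adjustments in effect
```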
[0164] Example 74 includes a method for providing availability data
to an orchestrator server, the method comprising receiving, by a
managed node, a workload from the orchestrator server; generating,
by the managed node, telemetry data indicative of resource
utilization as the workload is performed; comparing, by the managed
node, the telemetry data to one or more predefined thresholds to
provide availability data indicative of an availability of the
managed node to receive an additional workload; and reporting, by
the managed node, the availability data to be used by the
orchestrator server to adjust workload assignments.
[0165] Example 75 includes the subject matter of Example 74, and
wherein reporting the availability data comprises reporting the
availability data based on a foraging algorithm.
[0166] Example 76 includes the subject matter of any of Examples 74
and 75, and further including receiving, by the managed node, an
indication of a priority of the workload from the orchestrator
server; and wherein comparing the telemetry data to the one or more
predefined thresholds comprises selecting at least one predefined
threshold as a function of the priority of the workload.
[0167] Example 77 includes the subject matter of any of Examples
74-76, and wherein receiving an indication of a priority comprises
receiving an indication that the workload is to be executed
deterministically; and selecting at least one predefined threshold
comprises selecting at least one threshold associated with
deterministic execution.
[0168] Example 78 includes the subject matter of any of Examples
74-77, and wherein comparing the telemetry data to the one or more
predefined thresholds comprises comparing processor utilization
data to a processor availability threshold.
[0169] Example 79 includes the subject matter of any of Examples
74-78, and wherein comparing the telemetry data to the one or more
predefined thresholds comprises comparing memory utilization data
to a memory availability threshold.
[0170] Example 80 includes the subject matter of any of Examples
74-79, and wherein reporting the availability data comprises
reporting the availability data to another managed node to be
reported to the orchestrator server.
[0171] Example 81 includes the subject matter of any of Examples
74-80, and wherein reporting the availability data comprises
reporting the availability data directly to the orchestrator
server.
[0172] Example 82 includes the subject matter of any of Examples
74-81, and further including receiving, by the managed node,
additional availability data from at least one other managed node;
and reporting the availability data comprises reporting the
generated availability data and the additional availability data to
the orchestrator server.
[0173] Example 83 includes the subject matter of any of Examples
74-82, and wherein receiving additional availability data from at
least one other managed node comprises receiving additional
availability data from at least one other managed node with a
predefined relationship to the managed node.
[0174] Example 84 includes the subject matter of any of Examples
74-83, and wherein receiving additional availability data from at
least one other managed node comprises receiving additional
availability data from at least one other managed node identified
in a predefined set of managed nodes.
[0175] Example 85 includes the subject matter of any of Examples
74-84, and wherein receiving additional availability data from at
least one other managed node comprises receiving additional
availability data from at least one other managed node within a
predefined proximity of the managed node.
[0176] Example 86 includes the subject matter of any of Examples
74-85, and further including receiving, by the managed node, a
request for the availability data from the orchestrator server; and
wherein reporting the availability data comprises reporting, in
response to the request, the availability data.
[0177] Example 87 includes the subject matter of any of Examples
74-86, and further including receiving, by the managed node, a
request for the availability data from another managed node; and
wherein reporting the availability data comprises reporting, in
response to the request, the availability data.
[0178] Example 88 includes the subject matter of any of Examples
74-87, and further including receiving, by the managed node,
node-specific adjustments from the orchestrator server, wherein the
node-specific adjustments are indicative of at least one of a
processor throttle adjustment, a memory usage adjustment, a network
bandwidth adjustment, or a fan speed adjustment; and executing, by
the managed node, the workload with the node-specific
adjustments.
[0179] Example 89 includes one or more machine-readable storage
media comprising a plurality of instructions stored thereon that in
response to being executed, cause a managed node to perform the
method of any of Examples 74-88.
[0180] Example 90 includes a managed node for providing
availability data to an orchestrator server, the managed node
comprising one or more processors; communication circuitry coupled
to the one or more processors; one or more memory devices having
stored therein a plurality of instructions that, when executed by
the one or more processors, cause the managed node to perform the
method of any of Examples 74-88.
[0181] Example 91 includes a managed node for providing
availability data to an orchestrator server, the managed node
comprising workload executor circuitry to receive a workload from
the orchestrator server; telemetry data generator circuitry to
generate telemetry data indicative of resource utilization as the
workload is performed; and availability data manager circuitry to
compare the telemetry data to one or more predefined thresholds to
provide availability data indicative of an availability of the
managed node to receive an additional workload, and report the
availability data to be used by the orchestrator server to adjust
workload assignments.
[0182] Example 92 includes the subject matter of Example 91, and
wherein to report the availability data comprises to report the
availability data based on a foraging algorithm.
[0183] Example 93 includes the subject matter of any of Examples 91
and 92, and wherein the workload executor circuitry is further to
receive an indication of a priority of the workload from the
orchestrator server, and wherein to compare the telemetry data to
the one or more predefined thresholds comprises to select at least
one predefined threshold as a function of the priority of the
workload.
[0184] Example 94 includes the subject matter of any of Examples
91-93, and wherein to receive an indication of a priority comprises
to receive an indication that the workload is to be executed
deterministically; and to select at least one predefined threshold
comprises to select at least one threshold associated with
deterministic execution.
[0185] Example 95 includes the subject matter of any of Examples
91-94, and wherein to compare the telemetry data to the one or more
predefined thresholds comprises to compare processor utilization
data to a processor availability threshold.
[0186] Example 96 includes the subject matter of any of Examples
91-95, and wherein to compare the telemetry data to the one or more
predefined thresholds comprises to compare memory utilization data
to a memory availability threshold.
[0187] Example 97 includes the subject matter of any of Examples
91-96, and wherein to report the availability data comprises to
report the availability data to another managed node to be reported
to the orchestrator server.
[0188] Example 98 includes the subject matter of any of Examples
91-97, and wherein to report the availability data comprises to
report the availability data directly to the orchestrator
server.
[0189] Example 99 includes the subject matter of any of Examples
91-98, and wherein the availability data manager circuitry is further to
receive additional availability data from at least one other
managed node, and wherein to report the availability data comprises
to report the generated availability data and the additional
availability data to the orchestrator server.
[0190] Example 100 includes the subject matter of any of Examples
91-99, and wherein to receive additional availability data from at
least one other managed node comprises to receive additional
availability data from at least one other managed node with a
predefined relationship to the managed node.
[0191] Example 101 includes the subject matter of any of Examples
91-100, and wherein to receive additional availability data from at
least one other managed node comprises to receive additional
availability data from at least one other managed node identified
in a predefined set of managed nodes.
[0192] Example 102 includes the subject matter of any of Examples
91-101, and wherein to receive additional availability data from at
least one other managed node comprises to receive additional
availability data from at least one other managed node within a
predefined proximity of the managed node.
[0193] Example 103 includes the subject matter of any of Examples
91-102, and wherein the availability data manager circuitry is further to
receive a request for the availability data from the orchestrator
server, and wherein to report the availability data comprises to
report, in response to the request, the availability data.
[0194] Example 104 includes the subject matter of any of Examples
91-103, and wherein the availability data manager circuitry is
further to receive a request for the availability data from another
managed node, and wherein to report the availability data comprises
to report, in response to the request, the availability data.
[0195] Example 105 includes the subject matter of any of Examples
91-104, and wherein the workload executor circuitry is further to
receive node-specific adjustments from the orchestrator server,
wherein the node-specific adjustments are indicative of at least
one of a processor throttle adjustment, a memory usage adjustment,
a network bandwidth adjustment, or a fan speed adjustment; and
execute the workload with the node-specific adjustments.
[0196] Example 106 includes a managed node for providing
availability data to an orchestrator server, the managed node
comprising circuitry for receiving a workload from the orchestrator
server; means for generating telemetry data indicative of resource
utilization as the workload is performed; means for comparing the
telemetry data to one or more predefined thresholds to provide
availability data indicative of an availability of the managed node
to receive an additional workload; and means for reporting the
availability data to be used by the orchestrator server to adjust
workload assignments.
[0197] Example 107 includes the subject matter of Example 106, and
wherein the means for reporting the availability data comprises
means for reporting the availability data based on a foraging
algorithm.
[0198] Example 108 includes the subject matter of any of Examples
106 and 107, and further including circuitry for receiving an
indication of a priority of the workload from the orchestrator
server; and wherein the means for comparing the telemetry data to
the one or more predefined thresholds comprises means for selecting
at least one predefined threshold as a function of the priority of
the workload.
[0199] Example 109 includes the subject matter of any of Examples
106-108, and wherein the circuitry for receiving an indication of a
priority comprises circuitry for receiving an indication that the
workload is to be executed deterministically; and wherein the means
for selecting at least one predefined threshold comprises means for
selecting at least one threshold associated with deterministic
execution.
[0200] Example 110 includes the subject matter of any of Examples
106-109, and wherein the means for comparing the telemetry data to
the one or more predefined thresholds comprises means for comparing
processor utilization data to a processor availability
threshold.
[0201] Example 111 includes the subject matter of any of Examples
106-110, and wherein the means for comparing the telemetry data to
the one or more predefined thresholds comprises means for comparing
memory utilization data to a memory availability threshold.
[0202] Example 112 includes the subject matter of any of Examples
106-111, and wherein the means for reporting the availability data
comprises means for reporting the availability data to another
managed node to be reported to the orchestrator server.
[0203] Example 113 includes the subject matter of any of Examples
106-112, and wherein the means for reporting the availability data
comprises means for reporting the availability data directly to the
orchestrator server.
[0204] Example 114 includes the subject matter of any of Examples
106-113, and further including circuitry for receiving additional
availability data from at least one other managed node; and the
means for reporting the availability data comprises means for
reporting the generated availability data and the additional
availability data to the orchestrator server.
[0205] Example 115 includes the subject matter of any of Examples
106-114, and wherein the circuitry for receiving additional
availability data from at least one other managed node comprises
circuitry for receiving additional availability data from at least
one other managed node with a predefined relationship to the
managed node.
[0206] Example 116 includes the subject matter of any of Examples
106-115, and wherein the circuitry for receiving additional
availability data from at least one other managed node comprises
circuitry for receiving additional availability data from at least
one other managed node identified in a predefined set of managed
nodes.
[0207] Example 117 includes the subject matter of any of Examples
106-116, and wherein the circuitry for receiving additional
availability data from at least one other managed node comprises
circuitry for receiving additional availability data from at least
one other managed node within a predefined proximity of the managed
node.
[0208] Example 118 includes the subject matter of any of Examples
106-117, and further including circuitry for receiving a request
for the availability data from the orchestrator server; and wherein
the means for reporting the availability data comprises means for
reporting, in response to the request, the availability data.
[0209] Example 119 includes the subject matter of any of Examples
106-118, and further including circuitry for receiving a request
for the availability data from another managed node; and wherein
the means for reporting the availability data comprises means for
reporting, in response to the request, the availability data.
[0210] Example 120 includes the subject matter of any of Examples
106-119, and further including circuitry for receiving
node-specific adjustments from the orchestrator server, wherein the
node-specific adjustments are indicative of at least one of a
processor throttle adjustment, a memory usage adjustment, a network
bandwidth adjustment, or a fan speed adjustment; and means for
executing the workload with the node-specific adjustments.
* * * * *