U.S. patent application number 15/407329 was filed with the patent office on January 17, 2017, and published on January 25, 2018, as publication number 20180024861, for technologies for managing allocation of accelerator resources.
The applicant listed for this patent is Intel Corporation. The invention is credited to Susanne M. Balle and Rahul Khanna.
United States Patent Application 20180024861 (Kind Code A1)
Balle, Susanne M.; et al.
Published: January 25, 2018
Application Number: 15/407329
Family ID: 60804962
TECHNOLOGIES FOR MANAGING ALLOCATION OF ACCELERATOR RESOURCES
Abstract
Technologies for dynamically managing the allocation of
accelerator resources include an orchestrator server. The
orchestrator server is to assign a workload to a managed node for
execution, determine a predicted demand for one or more accelerator
resources to accelerate the execution of one or more jobs within
the workload, provision, prior to the predicted demand, one or more
accelerator resources to accelerate the one or more jobs, and
allocate the one or more provisioned accelerator resources to the
managed node to accelerate the execution of the one or more jobs.
Other embodiments are also described and claimed.
Inventors: Balle, Susanne M. (Hudson, NH); Khanna, Rahul (Portland, OR)
Applicant: Intel Corporation, Santa Clara, CA, US
Family ID: 60804962
Appl. No.: 15/407329
Filed: January 17, 2017
Related U.S. Patent Documents
Application No. 62/365,969, filed Jul. 22, 2016
Application No. 62/376,859, filed Aug. 18, 2016
Application No. 62/427,268, filed Nov. 29, 2016
Current U.S. Class: 718/104
Current CPC Class:
G05D 23/1921 20130101;
G06F 12/109 20130101; G06F 15/161 20130101; H03M 7/30 20130101;
H03M 7/40 20130101; H04L 47/765 20130101; H04Q 2011/0041 20130101;
H05K 7/1422 20130101; H05K 2201/10159 20130101; G06F 3/065
20130101; G06F 13/4022 20130101; G11C 5/02 20130101; H03M 7/3084
20130101; H04L 49/15 20130101; H04L 67/1004 20130101; H05K 7/1487
20130101; G06F 3/0631 20130101; G06F 3/0647 20130101; G06Q 10/06314
20130101; H04L 41/024 20130101; Y02P 90/30 20151101; H03M 7/3086
20130101; H04L 43/0876 20130101; H04L 67/1097 20130101; B65G 1/0492
20130101; G06F 3/0611 20130101; G06F 1/20 20130101; G08C 17/02
20130101; G11C 11/56 20130101; H04L 12/2809 20130101; H04L 29/12009
20130101; H04L 47/782 20130101; H04L 49/555 20130101; H05K 7/1492
20130101; G06F 9/4881 20130101; G06F 9/5044 20130101; G06F
2212/1041 20130101; G06F 2212/401 20130101; H04L 41/082 20130101;
H04L 41/145 20130101; H04Q 11/0062 20130101; H05K 7/20727 20130101;
G06F 13/161 20130101; G06F 13/4068 20130101; G11C 7/1072 20130101;
H05K 7/1461 20130101; H05K 7/20745 20130101; B25J 15/0014 20130101;
G02B 6/3893 20130101; H03M 7/4081 20130101; H04L 9/0643 20130101;
H04L 41/12 20130101; H05K 1/181 20130101; G06F 11/141 20130101;
G06F 13/1668 20130101; H04Q 2011/0052 20130101; G06F 2212/152
20130101; G06Q 10/06 20130101; G06Q 50/04 20130101; H04L 9/3263
20130101; H04L 43/08 20130101; G05D 23/2039 20130101; G06F 3/0683
20130101; G06F 3/0688 20130101; G06F 12/0893 20130101; G11C 14/0009
20130101; H04L 47/38 20130101; H05K 7/20736 20130101; G06F 3/064
20130101; G06F 3/0653 20130101; G06F 3/0664 20130101; G06F 9/505
20130101; G06F 9/5072 20130101; G06F 2212/202 20130101; H03M 7/4031
20130101; H03M 7/6023 20130101; H04B 10/25 20130101; H04Q 1/09
20130101; H05K 2201/10121 20130101; Y10S 901/01 20130101; G02B
6/3897 20130101; G02B 6/4292 20130101; G06F 3/0616 20130101; G06Q
10/20 20130101; H04L 41/0813 20130101; G06F 13/1694 20130101; H04L
49/45 20130101; H04L 67/1029 20130101; H04Q 1/04 20130101; H04Q
11/0005 20130101; H05K 1/0203 20130101; H04L 67/1008 20130101; H04L
69/04 20130101; G06F 9/30036 20130101; G06F 2209/5019 20130101;
G06Q 10/087 20130101; H03M 7/6005 20130101; G06F 16/9014 20190101;
H04L 9/3247 20130101; H04L 41/147 20130101; H04L 47/805 20130101;
H04L 49/35 20130101; H04L 67/34 20130101; H04Q 2011/0086 20130101;
H04Q 2213/13527 20130101; H05K 7/1498 20130101; G06F 1/183
20130101; G06F 8/65 20130101; G06F 9/5077 20130101; G06F 15/8061
20130101; H04L 67/12 20130101; H05K 7/1485 20130101; H05K 7/20836
20130101; G06F 9/5016 20130101; G06F 2212/7207 20130101; H04L
67/1012 20130101; H04L 69/329 20130101; H04Q 2213/13523 20130101;
H05K 5/0204 20130101; G02B 6/3882 20130101; G06F 9/544 20130101;
G11C 5/06 20130101; G06F 3/0655 20130101; G06F 12/10 20130101; G06F
13/409 20130101; G06F 2212/1024 20130101; H04L 9/14 20130101; H04L
43/0894 20130101; H04L 43/16 20130101; H04L 49/00 20130101; H05K
7/1442 20130101; H05K 7/2039 20130101; H05K 7/1421 20130101; H05K
2201/066 20130101; G06F 3/0613 20130101; G06F 9/4401 20130101; G06F
12/0862 20130101; G06F 13/42 20130101; G06F 2209/5022 20130101;
G08C 2200/00 20130101; H04Q 11/0003 20130101; H04W 4/023 20130101;
H05K 7/1447 20130101; G06F 3/0625 20130101; G06F 3/067 20130101;
G06F 3/0673 20130101; G06F 2209/483 20130101; H04B 10/25891
20200501; G06F 3/0665 20130101; G07C 5/008 20130101; H04L 49/25
20130101; H04L 67/02 20130101; H04L 67/16 20130101; H04L 41/0896
20130101; H04L 45/02 20130101; H04Q 11/0071 20130101; H05K 7/1489
20130101; H05K 7/1491 20130101; G06F 9/5027 20130101; G06F 11/3414
20130101; G06F 13/4282 20130101; G06F 2212/1008 20130101; G06F
2212/1044 20130101; G06F 2212/402 20130101; H04L 67/10 20130101;
H04W 4/80 20180201; H05K 7/1418 20130101; H05K 7/20709 20130101;
H05K 2201/10189 20130101; G06F 13/385 20130101; H03M 7/4056
20130101; G06F 3/061 20130101; G06F 3/0658 20130101; G06F 3/0689
20130101; H04L 67/306 20130101; H04Q 2011/0037 20130101; H04Q
2011/0073 20130101; H05K 13/0486 20130101; G06F 3/0659 20130101;
G06F 9/3887 20130101; H04L 43/065 20130101; H04L 49/357 20130101;
H04L 67/1014 20130101; H04L 67/1034 20130101; H04Q 2011/0079
20130101; Y02D 10/00 20180101; G06F 3/0638 20130101; H04L 41/5019
20130101; H04L 45/52 20130101; H04L 47/82 20130101; G06F 3/0619
20130101; G06F 3/0679 20130101; G06F 12/1408 20130101; H04L 47/24
20130101; H04Q 11/00 20130101; G02B 6/4452 20130101; H04L 41/046
20130101; H04L 43/0817 20130101; H04L 47/823 20130101; Y04S 10/50
20130101
International Class: G06F 9/50 (20060101); G06F 009/50
Claims
1. An orchestrator server to dynamically manage the allocation of
accelerator resources, the orchestrator server comprising: one or
more processors; one or more memory devices having stored therein a
plurality of instructions that, when executed by the one or more
processors, cause the orchestrator server to: assign a workload to
a managed node for execution; determine a predicted demand for one
or more accelerator resources to accelerate the execution of one or
more jobs within the workload; provision, prior to the predicted
demand, one or more accelerator resources to accelerate the one or
more jobs; and allocate the one or more provisioned accelerator
resources to the managed node to accelerate the execution of the
one or more jobs.
2. The orchestrator server of claim 1, wherein to determine the
predicted demand comprises to determine a demand for one or more
field programmable gate arrays (FPGAs).
3. The orchestrator server of claim 2, wherein to provision the one
or more accelerator resources comprises to provide, to the one or
more FPGAs, a bit stream indicative of a configuration of each FPGA
to accelerate execution of the one or more jobs.
4. The orchestrator server of claim 1, wherein to determine the
predicted demand comprises to determine the number of accelerator
resources to allocate to satisfy the predicted demand.
5. The orchestrator server of claim 1, wherein to provision the one
or more accelerator resources comprises to provision one or more
accelerator resources located on one or more sleds that are
different than a sled on which the workload is presently
executed.
6. The orchestrator server of claim 1, wherein the plurality of
instructions, when executed, further cause the orchestrator server
to: determine a configuration time period to provision each of the
one or more accelerator resources; and determine a predicted time
of the predicted demand; and wherein to provision the one or more
accelerator resources comprises to begin configuration of the one
or more accelerator resources for accelerated execution of the one
or more jobs at a time that is earlier than the predicted time by
at least the configuration time period.
7. The orchestrator server of claim 1, wherein the plurality of
instructions, when executed, further cause the orchestrator server
to: identify one or more jobs within the workload to be accelerated
with one or more field programmable gate arrays (FPGAs); and associate
each identified job with a globally unique identifier indicative of
one or more of a specific interface of the job or a definition of
the job.
8. The orchestrator server of claim 7, wherein to associate each
identified job with a globally unique identifier comprises to
associate each identified job with a globally unique identifier
indicative of one or more of a size of an input or a format of an
input to the job.
9. The orchestrator server of claim 1, wherein the managed node is
one of a plurality of managed nodes and the workload is one of a
plurality of workloads executed by the managed nodes and the
plurality of instructions, when executed, further cause the
orchestrator server to: determine, for each workload, a local count
indicative of a number of times a job is executed in each workload;
determine a global count indicative of a number of times a job is
executed by all of the managed nodes; determine whether one or more
of the local count or the global count satisfies a threshold count
value; and identify, in response to a determination that one or
more of the local count or the global count satisfies the threshold
count value, the associated job as a job to be accelerated.
10. The orchestrator server of claim 9, wherein the plurality of
instructions, when executed, further cause the orchestrator server
to identify, from a plurality of accelerator resources, the one or
more accelerator resources to accelerate the one or more jobs.
11. The orchestrator server of claim 10, wherein to identify the
one or more accelerator resources comprises to determine whether
one or more of the accelerator resources is already configured to
perform one or more of the jobs; and select, in response to a
determination that one or more of the accelerator resources is already
configured to perform one or more of the jobs, the one or more
already-configured accelerator resources for acceleration of the
one or more jobs.
12. The orchestrator server of claim 10, wherein to identify the
one or more accelerator resources comprises to select the one or
more accelerator resources as a function of one or more of a target
heat generation, a target power usage, or a target economic cost of
utilization of the one or more accelerator resources.
13. The orchestrator server of claim 1, wherein the managed node is
one of a plurality of managed nodes and the workload is one of a
plurality of workloads executed by the managed nodes, and wherein
to determine the demand comprises to: establish a job queue
indicative of all jobs for all of the workloads to be performed;
determine an average time period in which each job resides in the
job queue; and determine the demand for each job as a function of
the average time period for each job.
14. The orchestrator server of claim 13, wherein to determine the
demand for each job further comprises to apply an exponential
averaging algorithm to the time period in which each job resides in
the job queue.
15. One or more machine-readable storage media comprising a
plurality of instructions stored thereon that, in response to being
executed, cause an orchestrator server to: assign a workload to a
managed node for execution; determine a predicted demand for one or
more accelerator resources to accelerate the execution of one or
more jobs within the workload; provision, prior to the predicted
demand, one or more accelerator resources to accelerate the one or
more jobs; and allocate the one or more provisioned accelerator
resources to the managed node to accelerate the execution of the
one or more jobs.
16. The one or more machine-readable storage media of claim 15,
wherein to determine the predicted demand comprises to determine a
demand for one or more field programmable gate arrays (FPGAs).
17. The one or more machine-readable storage media of claim 16,
wherein to provision the one or more accelerator resources
comprises to provide, to the one or more FPGAs, a bit stream
indicative of a configuration of each FPGA to accelerate execution
of the one or more jobs.
18. The one or more machine-readable storage media of claim 15,
wherein to determine the predicted demand comprises to determine
the number of accelerator resources to allocate to satisfy the
predicted demand.
19. The one or more machine-readable storage media of claim 15,
wherein to provision the one or more accelerator resources
comprises to provision one or more accelerator resources located on
one or more sleds that are different than a sled on which the
workload is presently executed.
20. The one or more machine-readable storage media of claim 15,
wherein the plurality of instructions, when executed, further cause
the orchestrator server to: determine a configuration time period
to provision each of the one or more accelerator resources; and
determine a predicted time of the predicted demand; and wherein to
provision the one or more accelerator resources comprises to begin
configuration of the one or more accelerator resources for
accelerated execution of the one or more jobs at a time that is
earlier than the predicted time by at least the configuration time
period.
21. The one or more machine-readable storage media of claim 15,
wherein the plurality of instructions, when executed, further cause
the orchestrator server to: identify one or more jobs within the
workload to be accelerated with one or more field programmable gate
arrays (FPGAs); and associate each identified job with a globally
unique identifier indicative of one or more of a specific interface
of the job or a definition of the job.
22. The one or more machine-readable storage media of claim 21,
wherein to associate each identified job with a globally unique
identifier comprises to associate each identified job with a
globally unique identifier indicative of one or more of a size of
an input or a format of an input to the job.
23. The one or more machine-readable storage media of claim 15,
wherein the managed node is one of a plurality of managed nodes and
the workload is one of a plurality of workloads executed by the
managed nodes and the plurality of instructions, when executed,
further cause the orchestrator server to: determine, for each
workload, a local count indicative of a number of times a job is
executed in each workload; determine a global count indicative of a
number of times a job is executed by all of the managed nodes;
determine whether one or more of the local count or the global
count satisfies a threshold count value; and identify, in response
to a determination that one or more of the local count or the
global count satisfies the threshold count value, the associated
job as a job to be accelerated.
24. The one or more machine-readable storage media of claim 23,
wherein the plurality of instructions, when executed, further cause
the orchestrator server to identify, from a plurality of
accelerator resources, the one or more accelerator resources to
accelerate the one or more jobs.
25. An orchestrator server to dynamically manage the allocation of
accelerator resources, the orchestrator server comprising:
circuitry for assigning a workload to a managed node for execution;
means for determining a predicted demand for one or more
accelerator resources to accelerate the execution of one or more
jobs within the workload; circuitry for provisioning, by the
orchestrator server and prior to the predicted demand, one or more
accelerator resources to accelerate the one or more jobs; and
circuitry for allocating the one or more provisioned accelerator
resources to the managed node to accelerate the execution of the
one or more jobs.
26. A method for dynamically managing the allocation of accelerator
resources, the method comprising: assigning, by an orchestrator
server, a workload to a managed node for execution; determining, by
the orchestrator server, a predicted demand for one or more
accelerator resources to accelerate the execution of one or more
jobs within the workload; provisioning, by the orchestrator server
and prior to the predicted demand, one or more accelerator
resources to accelerate the one or more jobs; and allocating, by
the orchestrator server, the one or more provisioned accelerator
resources to the managed node to accelerate the execution of the
one or more jobs.
27. The method of claim 26, wherein determining the predicted
demand comprises determining a demand for one or more field
programmable gate arrays (FPGAs).
28. The method of claim 27, wherein provisioning the one or more
accelerator resources comprises providing, to the one or more
FPGAs, a bit stream indicative of a configuration of each FPGA to
accelerate execution of the one or more jobs.
Description
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] The present application claims the benefit of U.S.
Provisional Patent Application No. 62/365,969, filed Jul. 22, 2016,
U.S. Provisional Patent Application No. 62/376,859, filed Aug. 18,
2016, and U.S. Provisional Patent Application No. 62/427,268, filed
Nov. 29, 2016.
BACKGROUND
[0002] In a typical cloud-based computing environment (e.g., a data
center), multiple compute nodes may execute workloads (e.g.,
processes, applications, services, etc.) on behalf of customers.
One or more of the workloads may include sets of functions (e.g.,
jobs) that could be accelerated using accelerator resources such
as field programmable gate arrays (FPGAs), dedicated graphics
processors, or other specialized devices for accelerating specific
types of jobs. In typical data centers, all or a subset of the
compute nodes may be physically equipped (e.g., on the same board
as the central processing unit) with one or more accelerator
resources. However, in such data centers, the accelerator resources
may go unused or may be used only a subset of the time that the
workloads are being executed, as the workloads assigned to the compute
nodes may not include jobs that are amenable to acceleration.
Furthermore, even in data centers in which each compute node is
assembled from resources distributed across the data center when a
workload is assigned to the compute node, information regarding
whether the assigned workload may benefit from acceleration may be
unavailable. As such, the compute node may be assembled without the
accelerator resources that could be beneficial to the execution of
the workload, or may be assembled with one or more accelerator
resources that are underutilized (e.g., idle more than a threshold
amount of time) during the execution of the workload. As such, the
allocation of accelerator resources in typical data centers is
problematic and can often result in inefficient use of resources
and, as a result, unnecessary costs for the operator of the data
center.
BRIEF DESCRIPTION OF THE DRAWINGS
[0003] The concepts described herein are illustrated by way of
example and not by way of limitation in the accompanying figures.
For simplicity and clarity of illustration, elements illustrated in
the figures are not necessarily drawn to scale. Where considered
appropriate, reference labels have been repeated among the figures
to indicate corresponding or analogous elements.
[0004] FIG. 1 is a diagram of a conceptual overview of a data
center in which one or more techniques described herein may be
implemented according to various embodiments;
[0005] FIG. 2 is a diagram of an example embodiment of a logical
configuration of a rack of the data center of FIG. 1;
[0006] FIG. 3 is a diagram of an example embodiment of another data
center in which one or more techniques described herein may be
implemented according to various embodiments;
[0007] FIG. 4 is a diagram of another example embodiment of a data
center in which one or more techniques described herein may be
implemented according to various embodiments;
[0008] FIG. 5 is a diagram of a connectivity scheme representative
of link-layer connectivity that may be established among various
sleds of the data centers of FIGS. 1, 3, and 4;
[0009] FIG. 6 is a diagram of a rack architecture that may be
representative of an architecture of any particular one of the
racks depicted in FIGS. 1-4 according to some embodiments;
[0010] FIG. 7 is a diagram of an example embodiment of a sled that
may be used with the rack architecture of FIG. 6;
[0011] FIG. 8 is a diagram of an example embodiment of a rack
architecture to provide support for sleds featuring expansion
capabilities;
[0012] FIG. 9 is a diagram of an example embodiment of a rack
implemented according to the rack architecture of FIG. 8;
[0013] FIG. 10 is a diagram of an example embodiment of a sled
designed for use in conjunction with the rack of FIG. 9;
[0014] FIG. 11 is a diagram of an example embodiment of a data
center in which one or more techniques described herein may be
implemented according to various embodiments;
[0015] FIG. 12 is a simplified block diagram of at least one
embodiment of a system for managing the allocation of accelerator
resources to managed nodes;
[0016] FIG. 13 is a simplified block diagram of at least one
embodiment of an orchestrator server of the system of FIG. 12;
[0017] FIG. 14 is a simplified block diagram of at least one
embodiment of an environment that may be established by the
orchestrator server of FIGS. 12 and 13; and
[0018] FIGS. 15-17 are a simplified flow diagram of at least one
embodiment of a method for managing the allocation of accelerator
resources among managed nodes as the managed nodes execute
workloads, that may be performed by the orchestrator server of
FIGS. 12-14.
DETAILED DESCRIPTION OF THE DRAWINGS
[0019] While the concepts of the present disclosure are susceptible
to various modifications and alternative forms, specific
embodiments thereof have been shown by way of example in the
drawings and will be described herein in detail. It should be
understood, however, that there is no intent to limit the concepts
of the present disclosure to the particular forms disclosed, but on
the contrary, the intention is to cover all modifications,
equivalents, and alternatives consistent with the present
disclosure and the appended claims.
[0020] References in the specification to "one embodiment," "an
embodiment," "an illustrative embodiment," etc., indicate that the
embodiment described may include a particular feature, structure,
or characteristic, but every embodiment may or may not necessarily
include that particular feature, structure, or characteristic.
Moreover, such phrases are not necessarily referring to the same
embodiment. Further, when a particular feature, structure, or
characteristic is described in connection with an embodiment, it is
submitted that it is within the knowledge of one skilled in the art
to effect such feature, structure, or characteristic in connection
with other embodiments whether or not explicitly described.
Additionally, it should be appreciated that items included in a
list in the form of "at least one A, B, and C" can mean (A); (B);
(C); (A and B); (A and C); (B and C); or (A, B, and C). Similarly,
items listed in the form of "at least one of A, B, or C" can mean
(A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and
C).
[0021] The disclosed embodiments may be implemented, in some cases,
in hardware, firmware, software, or any combination thereof. The
disclosed embodiments may also be implemented as instructions
carried by or stored on a transitory or non-transitory
machine-readable (e.g., computer-readable) storage medium, which
may be read and executed by one or more processors. A
machine-readable storage medium may be embodied as any storage
device, mechanism, or other physical structure for storing or
transmitting information in a form readable by a machine (e.g., a
volatile or non-volatile memory, a media disc, or other media
device).
[0022] In the drawings, some structural or method features may be
shown in specific arrangements and/or orderings. However, it should
be appreciated that such specific arrangements and/or orderings may
not be required. Rather, in some embodiments, such features may be
arranged in a different manner and/or order than shown in the
illustrative figures. Additionally, the inclusion of a structural
or method feature in a particular figure is not meant to imply that
such feature is required in all embodiments and, in some
embodiments, may not be included or may be combined with other
features.
[0023] FIG. 1 illustrates a conceptual overview of a data center
100 that may generally be representative of a data center or other
type of computing network in/for which one or more techniques
described herein may be implemented according to various
embodiments. As shown in FIG. 1, data center 100 may generally
contain a plurality of racks, each of which may house computing
equipment comprising a respective set of physical resources. In the
particular non-limiting example depicted in FIG. 1, data center 100
contains four racks 102A to 102D, which house computing equipment
comprising respective sets of physical resources (PCRs) 105A to
105D. According to this example, a collective set of physical
resources 106 of data center 100 includes the various sets of
physical resources 105A to 105D that are distributed among racks
102A to 102D. Physical resources 106 may include resources of
multiple types, such as--for example--processors, co-processors,
accelerators, field programmable gate arrays (FPGAs), memory, and
storage. The embodiments are not limited to these examples.
[0024] The illustrative data center 100 differs from typical data
centers in many ways. For example, in the illustrative embodiment,
the circuit boards ("sleds") on which components such as CPUs,
memory, and other components are placed are designed for increased
thermal performance. In particular, in the illustrative embodiment,
the sleds are shallower than typical boards. In other words, the
sleds are shorter from the front to the back, where cooling fans
are located. This decreases the length of the path that air must
travel across the components on the board. Further, the components
on the sled are spaced further apart than in typical circuit
boards, and the components are arranged to reduce or eliminate
shadowing (i.e., one component in the air flow path of another
component). In the illustrative embodiment, processing components
such as the processors are located on a top side of a sled while
near memory, such as DIMMs, are located on a bottom side of the
sled. As a result of the enhanced airflow provided by this design,
the components may operate at higher frequencies and power levels
than in typical systems, thereby increasing performance.
Furthermore, the sleds are configured to blindly mate with power
and data communication cables in each rack 102A, 102B, 102C, 102D,
enhancing their ability to be quickly removed, upgraded,
reinstalled, and/or replaced. Similarly, individual components
located on the sleds, such as processors, accelerators, memory, and
data storage drives, are configured to be easily upgraded due to
their increased spacing from each other. In the illustrative
embodiment, the components additionally include hardware
attestation features to prove their authenticity.
[0025] Furthermore, in the illustrative embodiment, the data center
100 utilizes a single network architecture ("fabric") that supports
multiple other network architectures including Ethernet and
Omni-Path. The sleds, in the illustrative embodiment, are coupled
to switches via optical fibers, which provide higher bandwidth and
lower latency than typical twisted pair cabling (e.g., Category 5,
Category 5e, Category 6, etc.). Due to the high bandwidth, low
latency interconnections and network architecture, the data center
100 may, in use, pool resources, such as memory, accelerators
(e.g., graphics accelerators, FPGAs, ASICs, etc.), and data storage
drives that are physically disaggregated, and provide them to
compute resources (e.g., processors) on an as-needed basis,
enabling the compute resources to access the pooled resources as if
they were local. The illustrative data center 100 additionally
receives usage information for the various resources, predicts
resource usage for different types of workloads based on past
resource usage, and dynamically reallocates the resources based on
this information.
[0026] The racks 102A, 102B, 102C, 102D of the data center 100 may
include physical design features that facilitate the automation of
a variety of types of maintenance tasks. For example, data center
100 may be implemented using racks that are designed to be
robotically-accessed, and to accept and house
robotically-manipulatable resource sleds. Furthermore, in the
illustrative embodiment, the racks 102A, 102B, 102C, 102D include
integrated power sources that receive a greater voltage than is
typical for power sources. The increased voltage enables the power
sources to provide additional power to the components on each sled,
enabling the components to operate at higher than typical
frequencies.
[0027] FIG. 2 illustrates an exemplary logical configuration of a
rack 202 of the data center 100. As shown in FIG. 2, rack 202 may
generally house a plurality of sleds, each of which may comprise a
respective set of physical resources. In the particular
non-limiting example depicted in FIG. 2, rack 202 houses sleds
204-1 to 204-4 comprising respective sets of physical resources
205-1 to 205-4, each of which constitutes a portion of the
collective set of physical resources 206 comprised in rack 202.
With respect to FIG. 1, if rack 202 is representative of--for
example--rack 102A, then physical resources 206 may correspond to
the physical resources 105A comprised in rack 102A. In the context
of this example, physical resources 105A may thus be made up of the
respective sets of physical resources, including physical storage
resources 205-1, physical accelerator resources 205-2, physical
memory resources 205-3, and physical compute resources 205-4
comprised in the sleds 204-1 to 204-4 of rack 202. The embodiments
are not limited to this example. Each sled may contain a pool of
each of the various types of physical resources (e.g., compute,
memory, accelerator, storage). By having robotically accessible and
robotically manipulatable sleds comprising disaggregated resources,
each type of resource can be upgraded independently of the others
and at its own optimized refresh rate.
[0028] FIG. 3 illustrates an example of a data center 300 that may
generally be representative of one in/for which one or more
techniques described herein may be implemented according to various
embodiments. In the particular non-limiting example depicted in
FIG. 3, data center 300 comprises racks 302-1 to 302-32. In various
embodiments, the racks of data center 300 may be arranged in such
fashion as to define and/or accommodate various access pathways.
For example, as shown in FIG. 3, the racks of data center 300 may
be arranged in such fashion as to define and/or accommodate access
pathways 311A, 311B, 311C, and 311D. In some embodiments, the
presence of such access pathways may generally enable automated
maintenance equipment, such as robotic maintenance equipment, to
physically access the computing equipment housed in the various
racks of data center 300 and perform automated maintenance tasks
(e.g., replace a failed sled, upgrade a sled). In various
embodiments, the dimensions of access pathways 311A, 311B, 311C,
and 311D, the dimensions of racks 302-1 to 302-32, and/or one or
more other aspects of the physical layout of data center 300 may be
selected to facilitate such automated operations. The embodiments
are not limited in this context.
[0029] FIG. 4 illustrates an example of a data center 400 that may
generally be representative of one in/for which one or more
techniques described herein may be implemented according to various
embodiments. As shown in FIG. 4, data center 400 may feature an
optical fabric 412. Optical fabric 412 may generally comprise a
combination of optical signaling media (such as optical cabling)
and optical switching infrastructure via which any particular sled
in data center 400 can send signals to (and receive signals from)
each of the other sleds in data center 400. The signaling
connectivity that optical fabric 412 provides to any given sled may
include connectivity both to other sleds in a same rack and sleds
in other racks. In the particular non-limiting example depicted in
FIG. 4, data center 400 includes four racks 402A to 402D. Racks
402A to 402D house respective pairs of sleds 404A-1 and 404A-2,
404B-1 and 404B-2, 404C-1 and 404C-2, and 404D-1 and 404D-2. Thus,
in this example, data center 400 comprises a total of eight sleds.
Via optical fabric 412, each such sled may possess signaling
connectivity with each of the seven other sleds in data center 400.
For example, via optical fabric 412, sled 404A-1 in rack 402A may
possess signaling connectivity with sled 404A-2 in rack 402A, as
well as the six other sleds 404B-1, 404B-2, 404C-1, 404C-2, 404D-1,
and 404D-2 that are distributed among the other racks 402B, 402C,
and 402D of data center 400. The embodiments are not limited to
this example.
[0030] FIG. 5 illustrates an overview of a connectivity scheme 500
that may generally be representative of link-layer connectivity
that may be established in some embodiments among the various sleds
of a data center, such as any of example data centers 100, 300, and
400 of FIGS. 1, 3, and 4. Connectivity scheme 500 may be
implemented using an optical fabric that features a dual-mode
optical switching infrastructure 514. Dual-mode optical switching
infrastructure 514 may generally comprise a switching
infrastructure that is capable of receiving communications
according to multiple link-layer protocols via a same unified set
of optical signaling media, and properly switching such
communications. In various embodiments, dual-mode optical switching
infrastructure 514 may be implemented using one or more dual-mode
optical switches 515. In various embodiments, dual-mode optical
switches 515 may generally comprise high-radix switches. In some
embodiments, dual-mode optical switches 515 may comprise multi-ply
switches, such as four-ply switches. In various embodiments,
dual-mode optical switches 515 may feature integrated silicon
photonics that enable them to switch communications with
significantly reduced latency in comparison to conventional
switching devices. In some embodiments, dual-mode optical switches
515 may constitute leaf switches 530 in a leaf-spine architecture
additionally including one or more dual-mode optical spine switches
520.
[0031] In various embodiments, dual-mode optical switches may be
capable of receiving both Ethernet protocol communications carrying
Internet Protocol (IP) packets and communications according to a
second, high-performance computing (HPC) link-layer protocol (e.g.,
Intel's Omni-Path Architecture, Infiniband) via optical signaling
media of an optical fabric. As reflected in FIG. 5, with respect to
any particular pair of sleds 504A and 504B possessing optical
signaling connectivity to the optical fabric, connectivity scheme
500 may thus provide support for link-layer connectivity via both
Ethernet links and HPC links. Thus, both Ethernet and HPC
communications can be supported by a single high-bandwidth,
low-latency switch fabric. The embodiments are not limited to this
example.
[0032] FIG. 6 illustrates a general overview of a rack architecture
600 that may be representative of an architecture of any particular
one of the racks depicted in FIGS. 1 to 4 according to some
embodiments. As reflected in FIG. 6, rack architecture 600 may
generally feature a plurality of sled spaces into which sleds may
be inserted, each of which may be robotically-accessible via a rack
access region 601. In the particular non-limiting example depicted
in FIG. 6, rack architecture 600 features five sled spaces 603-1 to
603-5. Sled spaces 603-1 to 603-5 feature respective multi-purpose
connector modules (MPCMs) 616-1 to 616-5.
[0033] FIG. 7 illustrates an example of a sled 704 that may be
representative of a sled of such a type. As shown in FIG. 7, sled
704 may comprise a set of physical resources 705, as well as an
MPCM 716 designed to couple with a counterpart MPCM when sled 704
is inserted into a sled space such as any of sled spaces 603-1 to
603-5 of FIG. 6. Sled 704 may also feature an expansion connector
717. Expansion connector 717 may generally comprise a socket, slot,
or other type of connection element that is capable of accepting
one or more types of expansion modules, such as an expansion sled
718. By coupling with a counterpart connector on expansion sled
718, expansion connector 717 may provide physical resources 705
with access to supplemental computing resources 705B residing on
expansion sled 718. The embodiments are not limited in this
context.
[0034] FIG. 8 illustrates an example of a rack architecture 800
that may be representative of a rack architecture that may be
implemented in order to provide support for sleds featuring
expansion capabilities, such as sled 704 of FIG. 7. In the
particular non-limiting example depicted in FIG. 8, rack
architecture 800 includes seven sled spaces 803-1 to 803-7, which
feature respective MPCMs 816-1 to 816-7. Sled spaces 803-1 to 803-7
include respective primary regions 803-1A to 803-7A and respective
expansion regions 803-1B to 803-7B. With respect to each such sled
space, when the corresponding MPCM is coupled with a counterpart
MPCM of an inserted sled, the primary region may generally
constitute a region of the sled space that physically accommodates
the inserted sled. The expansion region may generally constitute a
region of the sled space that can physically accommodate an
expansion module, such as expansion sled 718 of FIG. 7, in the
event that the inserted sled is configured with such a module.
[0035] FIG. 9 illustrates an example of a rack 902 that may be
representative of a rack implemented according to rack architecture
800 of FIG. 8 according to some embodiments. In the particular
non-limiting example depicted in FIG. 9, rack 902 features seven
sled spaces 903-1 to 903-7, which include respective primary
regions 903-1A to 903-7A and respective expansion regions 903-1B to
903-7B. In various embodiments, temperature control in rack 902 may
be implemented using an air cooling system. For example, as
reflected in FIG. 9, rack 902 may feature a plurality of fans 919
that are generally arranged to provide air cooling within the
various sled spaces 903-1 to 903-7. In some embodiments, the height
of the sled space is greater than the conventional "1U" server
height. In such embodiments, fans 919 may generally comprise
relatively slow, large diameter cooling fans as compared to fans
used in conventional rack configurations. Running larger diameter
cooling fans at lower speeds may increase fan lifetime relative to
smaller diameter cooling fans running at higher speeds while still
providing the same amount of cooling. The sleds are physically
shallower than conventional rack dimensions. Further, components
are arranged on each sled to reduce thermal shadowing (i.e., not
arranged serially in the direction of air flow). As a result, the
wider, shallower sleds allow for an increase in device performance
because the devices can be operated at a higher thermal envelope
(e.g., 250 W) due to improved cooling (i.e., no thermal shadowing,
more space between devices, more room for larger heat sinks,
etc.).
[0036] MPCMs 916-1 to 916-7 may be configured to provide inserted
sleds with access to power sourced by respective power modules
920-1 to 920-7, each of which may draw power from an external power
source 921. In various embodiments, external power source 921 may
deliver alternating current (AC) power to rack 902, and power
modules 920-1 to 920-7 may be configured to convert such AC power
to direct current (DC) power to be sourced to inserted sleds. In
some embodiments, for example, power modules 920-1 to 920-7 may be
configured to convert 277-volt AC power into 12-volt DC power for
provision to inserted sleds via respective MPCMs 916-1 to 916-7.
The embodiments are not limited to this example.
[0037] MPCMs 916-1 to 916-7 may also be arranged to provide
inserted sleds with optical signaling connectivity to a dual-mode
optical switching infrastructure 914, which may be the same as--or
similar to--dual-mode optical switching infrastructure 514 of FIG.
5. In various embodiments, optical connectors contained in MPCMs
916-1 to 916-7 may be designed to couple with counterpart optical
connectors contained in MPCMs of inserted sleds to provide such
sleds with optical signaling connectivity to dual-mode optical
switching infrastructure 914 via respective lengths of optical
cabling 922-1 to 922-7. In some embodiments, each such length of
optical cabling may extend from its corresponding MPCM to an
optical interconnect loom 923 that is external to the sled spaces
of rack 902. In various embodiments, optical interconnect loom 923
may be arranged to pass through a support post or other type of
load-bearing element of rack 902. The embodiments are not limited
in this context. Because inserted sleds connect to an optical
switching infrastructure via MPCMs, the resources typically spent
in manually configuring the rack cabling to accommodate a newly
inserted sled can be saved.
[0038] FIG. 10 illustrates an example of a sled 1004 that may be
representative of a sled designed for use in conjunction with rack
902 of FIG. 9 according to some embodiments. Sled 1004 may feature
an MPCM 1016 that comprises an optical connector 1016A and a power
connector 1016B, and that is designed to couple with a counterpart
MPCM of a sled space in conjunction with insertion of MPCM 1016
into that sled space. Coupling MPCM 1016 with such a counterpart
MPCM may cause power connector 1016B to couple with a power
connector comprised in the counterpart MPCM. This may generally
enable physical resources 1005 of sled 1004 to source power from an
external source, via power connector 1016B and power transmission
media 1024 that conductively couples power connector 1016B to
physical resources 1005.
[0039] Sled 1004 may also include dual-mode optical network
interface circuitry 1026. Dual-mode optical network interface
circuitry 1026 may generally comprise circuitry that is capable of
communicating over optical signaling media according to each of
multiple link-layer protocols supported by dual-mode optical
switching infrastructure 914 of FIG. 9. In some embodiments,
dual-mode optical network interface circuitry 1026 may be capable
both of Ethernet protocol communications and of communications
according to a second, high-performance protocol. In various
embodiments, dual-mode optical network interface circuitry 1026 may
include one or more optical transceiver modules 1027, each of which
may be capable of transmitting and receiving optical signals over
each of one or more optical channels. The embodiments are not
limited in this context.
[0040] Coupling MPCM 1016 with a counterpart MPCM of a sled space
in a given rack may cause optical connector 1016A to couple with an
optical connector comprised in the counterpart MPCM. This may
generally establish optical connectivity between optical cabling of
the sled and dual-mode optical network interface circuitry 1026,
via each of a set of optical channels 1025. Dual-mode optical
network interface circuitry 1026 may communicate with the physical
resources 1005 of sled 1004 via electrical signaling media 1028. In
addition to the dimensions of the sleds and arrangement of
components on the sleds to provide improved cooling and enable
operation at a relatively higher thermal envelope (e.g., 250 W), as
described above with reference to FIG. 9, in some embodiments, a
sled may include one or more additional features to facilitate air
cooling, such as a heatpipe and/or heat sinks arranged to dissipate
heat generated by physical resources 1005. It is worthy of note
that although the example sled 1004 depicted in FIG. 10 does not
feature an expansion connector, any given sled that features the
design elements of sled 1004 may also feature an expansion
connector according to some embodiments. The embodiments are not
limited in this context.
[0041] FIG. 11 illustrates an example of a data center 1100 that
may generally be representative of one in/for which one or more
techniques described herein may be implemented according to various
embodiments. As reflected in FIG. 11, a physical infrastructure
management framework 1150A may be implemented to facilitate
management of a physical infrastructure 1100A of data center 1100.
In various embodiments, one function of physical infrastructure
management framework 1150A may be to manage automated maintenance
functions within data center 1100, such as the use of robotic
maintenance equipment to service computing equipment within
physical infrastructure 1100A. In some embodiments, physical
infrastructure 1100A may feature an advanced telemetry system that
performs telemetry reporting that is sufficiently robust to support
remote automated management of physical infrastructure 1100A. In
various embodiments, telemetry information provided by such an
advanced telemetry system may support features such as failure
prediction/prevention capabilities and capacity planning
capabilities. In some embodiments, physical infrastructure
management framework 1150A may also be configured to manage
authentication of physical infrastructure components using hardware
attestation techniques. For example, robots may verify the
authenticity of components before installation by analyzing
information collected from a radio frequency identification (RFID)
tag associated with each component to be installed. The embodiments
are not limited in this context.
[0042] As shown in FIG. 11, the physical infrastructure 1100A of
data center 1100 may comprise an optical fabric 1112, which may
include a dual-mode optical switching infrastructure 1114. Optical
fabric 1112 and dual-mode optical switching infrastructure 1114 may
be the same as--or similar to--optical fabric 412 of FIG. 4 and
dual-mode optical switching infrastructure 514 of FIG. 5,
respectively, and may provide high-bandwidth, low-latency,
multi-protocol connectivity among sleds of data center 1100. As
discussed above, with reference to FIG. 1, in various embodiments,
the availability of such connectivity may make it feasible to
disaggregate and dynamically pool resources such as accelerators,
memory, and storage. In some embodiments, for example, one or more
pooled accelerator sleds 1130 may be included among the physical
infrastructure 1100A of data center 1100, each of which may
comprise a pool of accelerator resources--such as co-processors
and/or FPGAs, for example--that is globally accessible to other
sleds via optical fabric 1112 and dual-mode optical switching
infrastructure 1114.
[0043] In another example, in various embodiments, one or more
pooled storage sleds 1132 may be included among the physical
infrastructure 1100A of data center 1100, each of which may
comprise a pool of storage resources that is globally
accessible to other sleds via optical fabric 1112 and dual-mode
optical switching infrastructure 1114. In some embodiments, such
pooled storage sleds 1132 may comprise pools of solid-state storage
devices such as solid-state drives (SSDs). In various embodiments,
one or more high-performance processing sleds 1134 may be included
among the physical infrastructure 1100A of data center 1100. In
some embodiments, high-performance processing sleds 1134 may
comprise pools of high-performance processors, as well as cooling
features that enhance air cooling to yield a higher thermal
envelope of up to 250 W or more. In various embodiments, any given
high-performance processing sled 1134 may feature an expansion
connector 1117 that can accept a far memory expansion sled, such
that the far memory that is locally available to that
high-performance processing sled 1134 is disaggregated from the
processors and near memory comprised on that sled. In some
embodiments, such a high-performance processing sled 1134 may be
configured with far memory using an expansion sled that comprises
low-latency SSD storage. The optical infrastructure allows for
compute resources on one sled to utilize remote accelerator/FPGA,
memory, and/or SSD resources that are disaggregated on a sled
located on the same rack or any other rack in the data center. The
remote resources can be located one switch jump away or two switch
jumps away in the spine-leaf network architecture described above
with reference to FIG. 5. The embodiments are not limited in this
context.
[0044] In various embodiments, one or more layers of abstraction
may be applied to the physical resources of physical infrastructure
1100A in order to define a virtual infrastructure, such as a
software-defined infrastructure 1100B. In some embodiments, virtual
computing resources 1136 of software-defined infrastructure 1100B
may be allocated to support the provision of cloud services 1140.
In various embodiments, particular sets of virtual computing
resources 1136 may be grouped for provision to cloud services 1140
in the form of SDI services 1138. Examples of cloud services 1140
may include--without limitation--software as a service (SaaS)
services 1142, platform as a service (PaaS) services 1144, and
infrastructure as a service (IaaS) services 1146.
[0045] In some embodiments, management of software-defined
infrastructure 1100B may be conducted using a virtual
infrastructure management framework 1150B. In various embodiments,
virtual infrastructure management framework 1150B may be designed
to implement workload fingerprinting techniques and/or
machine-learning techniques in conjunction with managing allocation
of virtual computing resources 1136 and/or SDI services 1138 to
cloud services 1140. In some embodiments, virtual infrastructure
management framework 1150B may use/consult telemetry data in
conjunction with performing such resource allocation. In various
embodiments, an application/service management framework 1150C may
be implemented in order to provide QoS management capabilities for
cloud services 1140. The embodiments are not limited in this
context.
[0046] As shown in FIG. 12, an illustrative system 1210 for
managing the allocation of accelerator resources (e.g., physical
accelerator resources 205-2) among a set of managed nodes 1260
includes an orchestrator server 1240 in communication with the set
of managed nodes 1260. Each managed node 1260 may be embodied as an
assembly of resources (e.g., physical resources 206), such as
compute resources (e.g., physical compute resources 205-4), storage
resources (e.g., physical storage resources 205-1), accelerator
resources (e.g., physical accelerator resources 205-2), or other
resources (e.g., physical memory resources 205-3) from the same or
different sleds (e.g., the sleds 204-1, 204-2, 204-3, 204-4, etc.)
or racks (e.g., one or more of racks 302-1 through 302-32). Each
managed node 1260 may be established, defined, or "spun up" by the
orchestrator server 1240 at the time a workload is to be assigned
to the managed node 1260 or at any other time, and may exist
regardless of whether any workloads are presently assigned to the
managed node 1260. The system 1210 may be implemented in accordance
with the data centers 100, 300, 400, 1100 described above with
reference to FIGS. 1, 3, 4, and 11. In the illustrative embodiment,
the set of managed nodes 1260 includes managed nodes 1250, 1252,
and 1254. While three managed nodes 1260 are shown in the set, it
should be understood that in other embodiments, the set may include
a different number of managed nodes 1260 (e.g., tens of thousands).
The system 1210 may be located in a data center and provide storage
and compute services (e.g., cloud services) to a client device 1220
that is in communication with the system 1210 through a network
1230. The orchestrator server 1240 may support a cloud operating
environment, such as OpenStack, and the managed nodes 1260 may
execute one or more applications or processes (i.e., workloads),
such as in virtual machines or containers, on behalf of a user of
the client device 1220.
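
For illustration only, the following minimal Python sketch (not part of the application; the class and field names are hypothetical) models a managed node as an assembly of disaggregated resources drawn from different sleds, as described above:

from dataclasses import dataclass, field
from enum import Enum
from typing import List

class ResourceType(Enum):
    COMPUTE = "compute"          # e.g., physical compute resources 205-4
    MEMORY = "memory"            # e.g., physical memory resources 205-3
    STORAGE = "storage"          # e.g., physical storage resources 205-1
    ACCELERATOR = "accelerator"  # e.g., physical accelerator resources 205-2

@dataclass
class Resource:
    resource_type: ResourceType
    sled_id: str      # the sled (possibly in another rack) hosting this resource
    capacity: float   # normalized capacity units

@dataclass
class ManagedNode:
    node_id: str
    resources: List[Resource] = field(default_factory=list)
    workloads: List[str] = field(default_factory=list)

    def allocate(self, resource: Resource) -> None:
        """Add a disaggregated resource (e.g., a provisioned FPGA) to this node."""
        self.resources.append(resource)

    def deallocate(self, resource: Resource) -> None:
        """Return a resource to the shared pool when it is no longer needed."""
        self.resources.remove(resource)

# A managed node "spun up" from resources located on different sleds:
node = ManagedNode(node_id="node-1250")
node.allocate(Resource(ResourceType.COMPUTE, sled_id="sled-204-4", capacity=1.0))
node.allocate(Resource(ResourceType.ACCELERATOR, sled_id="sled-204-2", capacity=1.0))
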
[0047] As discussed in more detail herein, the orchestrator server
1240, in operation, is configured to assign workloads to managed
nodes 1260, receive telemetry data indicative of performance and
conditions from the managed nodes 1260 as the workloads are
performed, identify jobs within the workloads to be accelerated
with one or more accelerator resources 205-2, provision (e.g.,
configure) the accelerator resources 205-2 to accelerate the
identified jobs, and allocate the provisioned accelerator resources
205-2 to the managed nodes 1260 to accelerate the identified jobs.
In the illustrative embodiment, the accelerator resources 205-2
include field programmable gate arrays (FPGAs) and the orchestrator
server provisions the FPGAs by sending bitstreams indicative of
desired configurations of the FPGAs to accelerate particular jobs.
The orchestrator server 1240, in the illustrative embodiment,
determines when the demand for acceleration for a particular job is
likely to occur, based on evaluating the telemetry data and
identifying patterns in the execution of the jobs, and sends the
bitstreams to the FPGAs ahead of time, to provision the FPGAs in
time to accelerate the jobs when the acceleration demand occurs.
Additionally, the orchestrator server may receive resource
allocation objective data indicative of one or more objectives to
be achieved during the execution of the workloads. In the
illustrative embodiment, the objectives pertain to power
consumption, life expectancy, heat production, and/or performance
of the resources allocated to the managed nodes 1260. As the
workloads are executed, the orchestrator server 1240 may
selectively allocate or deallocate the accelerator resources 205-2
to achieve the resource allocation objectives. In the illustrative
embodiment, the achievement of an objective may be measured as, equal
to, or otherwise defined as the degree to which a measured value
from one or more managed nodes 1260 satisfies a target value
associated with the objective. For example, in the illustrative
embodiment, increasing the achievement may be performed by
decreasing the error (e.g., difference) between the measured value
(e.g., a time taken to complete a workload or an operation in a
workload) and the target value (e.g., a target time to complete the
workload or operation in the workload). Conversely, decreasing the
achievement may be performed by increasing the error (e.g.,
difference) between the measured value and the target value.
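
As an illustrative aside (not part of the application; the names, constants, and specific formulas are assumptions), the following Python sketch shows one way the behavior described above and recited in claims 6, 13, and 14 could be approximated: exponentially averaging job-queue residence times as a demand signal, starting accelerator configuration ahead of the predicted demand by at least the configuration period, and treating achievement as inversely related to the error between a measured value and its target:

from dataclasses import dataclass

ALPHA = 0.3  # smoothing factor for the exponential average (illustrative value)

@dataclass
class JobStats:
    avg_queue_time: float = 0.0  # exponentially averaged queue residence time
    demand: float = 0.0          # predicted demand derived from queue pressure

def update_demand(stats: JobStats, observed_queue_time: float) -> JobStats:
    """Apply exponential averaging to the time a job resides in the job queue
    and use the result as a proxy for demand (longer residence -> higher demand)."""
    stats.avg_queue_time = (ALPHA * observed_queue_time
                            + (1.0 - ALPHA) * stats.avg_queue_time)
    stats.demand = stats.avg_queue_time
    return stats

def provisioning_start_time(predicted_demand_time: float,
                            configuration_period: float) -> float:
    """Begin configuring an accelerator (e.g., streaming a bitstream to an FPGA)
    early enough that it is ready when the predicted demand occurs."""
    return predicted_demand_time - configuration_period

def achievement(measured: float, target: float) -> float:
    """Achievement increases as the error between the measured value (e.g., time
    taken to complete a workload) and the target value decreases."""
    return 1.0 / (1.0 + abs(measured - target))

# Example: demand for a job rises as its queue residence time grows.
stats = JobStats()
for observed in (2.0, 4.0, 8.0):
    stats = update_demand(stats, observed)
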
[0048] Referring now to FIG. 13, the orchestrator server 1240 may
be embodied as any type of compute device capable of performing the
functions described herein, including issuing a request to have
cloud services performed, receiving results of the cloud services,
assigning workloads to compute devices, analyzing telemetry data
indicative of performance and conditions (e.g., resource
utilization, one or more temperatures, fan speeds, etc.) as the
workloads are executed, and managing the allocation of resources,
including accelerator resources 205-2, across the managed nodes
1260 as the workloads are executed. For example, the orchestrator
server 1240 may be embodied as a computer, a distributed computing
system, one or more sleds (e.g., the sleds 204-1, 204-2, 204-3,
204-4, etc.), a server (e.g., stand-alone, rack-mounted, blade,
etc.), a multiprocessor system, a network appliance (e.g., physical
or virtual), a desktop computer, a workstation, a laptop computer,
a notebook computer, a processor-based system, or a network
appliance. As shown in FIG. 13, the illustrative orchestrator
server 1240 includes a central processing unit (CPU) 1302, a main
memory 1304, an input/output (I/O) subsystem 1306, communication
circuitry 1308, and one or more data storage devices 1312. Of
course, in other embodiments, the orchestrator server 1240 may
include other or additional components, such as those commonly
found in a computer (e.g., display, peripheral devices, etc.).
Additionally, in some embodiments, one or more of the illustrative
components may be incorporated in, or otherwise form a portion of,
another component. For example, in some embodiments, the main
memory 1304, or portions thereof, may be incorporated in the CPU
1302.
[0049] The CPU 1302 may be embodied as any type of processor
capable of performing the functions described herein. The CPU 1302
may be embodied as a single or multi-core processor(s), a
microcontroller, or other processor or processing/controlling
circuit. In some embodiments, the CPU 1302 may be embodied as,
include, or be coupled to a field programmable gate array (FPGA),
an application specific integrated circuit (ASIC), reconfigurable
hardware or hardware circuitry, or other specialized hardware to
facilitate performance of the functions described herein.
Similarly, the main memory 1304 may be embodied as any type of
volatile (e.g., dynamic random access memory (DRAM), etc.) or
non-volatile memory or data storage capable of performing the
functions described herein. In some embodiments, all or a portion
of the main memory 1304 may be integrated into the CPU 1302. In
operation, the main memory 1304 may store various software and data
used during operation such as telemetry data, resource allocation
objective data, workload labels, workload classifications, job
data, resource allocation data, operating systems, applications,
programs, libraries, and drivers.
[0050] The I/O subsystem 1306 may be embodied as circuitry and/or
components to facilitate input/output operations with the CPU 1302,
the main memory 1304, and other components of the orchestrator
server 1240. For example, the I/O subsystem 1306 may be embodied
as, or otherwise include, memory controller hubs, input/output
control hubs, integrated sensor hubs, firmware devices,
communication links (e.g., point-to-point links, bus links, wires,
cables, light guides, printed circuit board traces, etc.), and/or
other components and subsystems to facilitate the input/output
operations. In some embodiments, the I/O subsystem 1306 may form a
portion of a system-on-a-chip (SoC) and be incorporated, along with
one or more of the CPU 1302, the main memory 1304, and other
components of the orchestrator server 1240, on a single integrated
circuit chip.
[0051] The communication circuitry 1308 may be embodied as any
communication circuit, device, or collection thereof, capable of
enabling communications over the network 1230 between the
orchestrator server 1240 and another compute device (e.g., the
client device 1220, and/or the managed nodes 1260). The
communication circuitry 1308 may be configured to use any one or
more communication technology (e.g., wired or wireless
communications) and associated protocols (e.g., Ethernet,
Bluetooth.RTM., Wi-Fi.RTM., WiMAX, etc.) to effect such
communication.
[0052] The illustrative communication circuitry 1308 includes a
network interface controller (NIC) 1310, which may also be referred
to as a host fabric interface (HFI). The NIC 1310 may be embodied
as one or more add-in-boards, daughtercards, network interface
cards, controller chips, chipsets, or other devices that may be
used by the orchestrator server 1240 to connect with another
compute device (e.g., the client device 1220 and/or the managed
nodes 1260). In some embodiments, the NIC 1310 may be embodied as
part of a system-on-a-chip (SoC) that includes one or more
processors, or included on a multichip package that also contains
one or more processors. In some embodiments, the NIC 1310 may
include a local processor (not shown) and/or a local memory (not
shown) that are both local to the NIC 1310. In such embodiments,
the local processor of the NIC 1310 may be capable of performing
one or more of the functions of the CPU 1302 described herein.
Additionally or alternatively, in such embodiments, the local
memory of the NIC 1310 may be integrated into one or more
components of the orchestrator server 1240 at the board level,
socket level, chip level, and/or other levels.
[0053] The one or more illustrative data storage devices 1312 may
be embodied as any type of devices configured for short-term or
long-term storage of data such as, for example, memory devices and
circuits, memory cards, hard disk drives, solid-state drives, or
other data storage devices. Each data storage device 1312 may
include a system partition that stores data and firmware code for
the data storage device 1312. Each data storage device 1312 may
also include an operating system partition that stores data files
and executables for an operating system.
[0054] Additionally or alternatively, the orchestrator server 1240
may include one or more peripheral devices 1314. Such peripheral
devices 1314 may include any type of peripheral device commonly
found in a compute device such as a display, speakers, a mouse, a
keyboard, and/or other input/output devices, interface devices,
and/or other peripheral devices.
[0055] The client device 1220 and the managed nodes 1260 may have
components similar to those described in FIG. 13. The description
of those components of the orchestrator server 1240 is equally
applicable to the description of components of the client device
1220 and the managed nodes 1260 and is not repeated herein for
clarity of the description. Further, it should be appreciated that
any of the client device 1220 and the managed nodes 1260 may
include other components, sub-components, and devices commonly
found in a computing device, which are not discussed above in
reference to the orchestrator server 1240 and not discussed herein
for clarity of the description. As discussed above, each managed
node 1260 may include resources distributed across multiple sleds
and in such embodiments, the CPU 1302, memory 1304, and/or
communication circuitry 1308 may include portions thereof located
on the same sled or different sled.
[0056] As described above, the client device 1220, the orchestrator
server 1240, and the managed nodes 1260 are illustratively in
communication via the network 1230, which may be embodied as any
type of wired or wireless communication network, including global
networks (e.g., the Internet), local area networks (LANs) or wide
area networks (WANs), cellular networks (e.g., Global System for
Mobile Communications (GSM), 3G, Long Term Evolution (LTE),
Worldwide Interoperability for Microwave Access (WiMAX), etc.),
digital subscriber line (DSL) networks, cable networks (e.g.,
coaxial networks, fiber networks, etc.), or any combination
thereof.
[0057] Referring now to FIG. 14, in the illustrative embodiment,
the orchestrator server 1240 may establish an environment 1400
during operation. The illustrative environment 1400 includes a
network communicator 1420, a telemetry monitor 1430, and a resource
manager 1440. Each of the components of the environment 1400 may be
embodied as hardware, firmware, software, or a combination thereof.
As such, in some embodiments, one or more of the components of the
environment 1400 may be embodied as circuitry or a collection of
electrical devices (e.g., network communicator circuitry 1420,
telemetry monitor circuitry 1430, resource manager circuitry 1440,
etc.). It should be appreciated that, in such embodiments, one or
more of the network communicator circuitry 1420, telemetry monitor
circuitry 1430, or resource manager circuitry 1440 may form a
portion of one or more of the CPU 1302, the main memory 1304, the
I/O subsystem 1306, and/or other components of the orchestrator
server 1240. In the illustrative embodiment, the environment 1400
includes telemetry data 1402 which may be embodied as data
indicative of the performance and conditions (e.g., resource
utilization, operating frequencies, power usage, one or more
temperatures, fan speeds, etc.) of resources allocated to each
managed node 1260 and individual jobs (e.g., set of functions) of
the workloads that are performed as the managed nodes 1260 execute
the workloads assigned to them. Additionally, the illustrative
environment 1400 includes resource allocation objective data 1404
indicative of user-defined thresholds or goals ("objectives") to be
satisfied during the execution of the workloads. In the
illustrative embodiment, the objectives pertain to power
consumption, life expectancy, heat production, and performance of
the resources allocated to the managed nodes 1260. Further, the
illustrative environment 1400 includes workload labels 1406 which
may be embodied as any identifiers (e.g., process numbers,
executable file names, alphanumeric tags, etc.) that uniquely
identify each workload executed by the managed nodes 1260.
[0058] Additionally, the illustrative environment 1400 includes
workload classifications 1408 which may be embodied as any data
indicative of the general resource utilization tendencies of each
workload (e.g., processor intensive, memory intensive, network
bandwidth intensive, etc.). Further, the illustrative environment
1400 includes job data 1410 indicative of jobs (e.g., sets of
functions) within each workload that may be accelerated. In the
illustrative embodiment, the job data 1410 is embodied as a queue
of jobs to be processed, an indication of the types of functions
within the job (e.g., compression, encryption, matrix operations,
etc.), information about the format and size of input data used by
the job (e.g., number of bytes, whether the input data is formatted
as a matrix or otherwise, an encoding scheme for the input data,
etc.), a globally unique identifier (GUID) associated with each
job, counters indicative of how many times a particular job has
been in the queue within a predefined time frame for each workload
and across all workloads executed in the data center 1100, the
average amount of time each job resides in the queue, and/or other
characteristics of the jobs. Additionally, the illustrative
environment 1400 includes resource allocation data 1412 indicative
of the resources, including accelerator resources 205-2, within the
data center 1100 that have been allocated to each managed node 1260
at any given time.
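For illustration only, the job data 1410 described above can be
pictured as one record per job plus a queue of pending records. The
following Python sketch is hypothetical; the class and field names are
assumptions chosen to mirror the characteristics listed in the
preceding paragraph.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class JobRecord:
        # One illustrative record per job, mirroring the job data 1410.
        guid: str                      # globally unique identifier of the job
        function_types: List[str]      # e.g., ["compression", "encryption"]
        input_size_bytes: int          # size of the input data
        input_format: str              # e.g., "matrix" or another format
        local_count: int = 0           # executions within one workload
        global_count: int = 0          # executions across all workloads
        avg_queue_time_s: float = 0.0  # average residence time in the queue

    @dataclass
    class JobQueue:
        # Illustrative queue of jobs awaiting processing.
        pending: List[JobRecord] = field(default_factory=list)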
[0059] In the illustrative environment 1400, the network
communicator 1420, which may be embodied as hardware, firmware,
software, virtualized hardware, emulated architecture, and/or a
combination thereof as discussed above, is configured to facilitate
inbound and outbound network communications (e.g., network traffic,
network packets, network flows, etc.) to and from the orchestrator
server 1240, respectively. To do so, the network communicator 1420
is configured to receive and process data packets from one system
or computing device (e.g., the client device 1220) and to prepare
and send data packets to another computing device or system (e.g.,
the managed nodes 1260). Accordingly, in some embodiments, at least
a portion of the functionality of the network communicator 1420 may
be performed by the communication circuitry 1308, and, in the
illustrative embodiment, by the NIC 1310.
[0060] The telemetry monitor 1430, which may be embodied as
hardware, firmware, software, virtualized hardware, emulated
architecture, and/or a combination thereof as discussed above, is
configured to collect the telemetry data 1402 from the managed
nodes 1260 as the managed nodes 1260 execute the workloads assigned
to them. The telemetry monitor 1430 may actively poll each of the
managed nodes 1260 for updated telemetry data 1402 on an ongoing
basis or may passively receive telemetry data 1402 from the managed
nodes 1260, such as by listening on a particular network port for
updated telemetry data 1402. The telemetry monitor 1430 may further
parse and categorize the telemetry data 1402, such as by separating
the telemetry data 1402 into an individual file or data set for
each managed node 1260.
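For illustration only, the per-node categorization performed by the
telemetry monitor 1430 might be organized as in the sketch below. The
dictionary-based grouping, the field names, and the sample values are
assumptions; the disclosure does not prescribe a particular data
layout.

    from collections import defaultdict
    from typing import Dict, List

    def categorize_telemetry(samples: List[dict]) -> Dict[str, List[dict]]:
        # Separate raw telemetry samples into an individual data set for
        # each managed node, as described for the telemetry monitor 1430.
        per_node: Dict[str, List[dict]] = defaultdict(list)
        for sample in samples:
            per_node[sample["node_id"]].append(sample)
        return dict(per_node)

    # Example with two samples from two managed nodes.
    samples = [
        {"node_id": "node-1", "cpu_util": 0.91, "temp_c": 61},
        {"node_id": "node-2", "cpu_util": 0.34, "temp_c": 48},
    ]
    print(categorize_telemetry(samples)["node-1"])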
[0061] The resource manager 1440, which may be embodied as
hardware, firmware, software, virtualized hardware, emulated
architecture, and/or a combination thereof, is configured to assign
workloads to managed nodes, identify jobs within the workloads to
accelerate, predict when acceleration demand will occur within the
workloads, provision (e.g., configure) accelerator resources 205-2
in advance of the predicted acceleration demand, and adjust the
allocation of accelerator resources 205-2 to and from the managed
nodes 1260 on an ongoing basis to improve the efficiency of
workload execution and/or satisfy other resource allocation
objectives (e.g., from the resource allocation objective data
1404).
[0062] To do so, the resource manager 1440 includes a workload
labeler 1442, a workload classifier 1444, a workload behavior
predictor 1446, an acceleration manager 1448, and a multi-objective
analyzer 1450. The workload labeler 1442, in the illustrative
embodiment, is configured to assign a workload label 1406 to each
workload presently performed or scheduled to be performed by the
managed nodes 1260. The workload labeler 1442 may generate the
workload label 1406 as a function of an executable name of the
workload, a hash of all or a portion of the code of the workload,
or based on any other method to uniquely identify each workload.
The workload classifier 1444, in the illustrative embodiment, is
configured to categorize each labeled workload based on the average
resource utilization of each workload (e.g., generally utilizes 65%
of processor capacity, generally utilizes 40% of memory capacity,
etc.).
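For illustration only, a workload label generated as a function of the
executable name and a hash of the workload code, as described above,
might be computed as follows. The choice of SHA-256 and the label
format are assumptions.

    import hashlib

    def workload_label(executable_name: str, code_bytes: bytes) -> str:
        # Illustrative label: the executable name joined with a truncated
        # hash of the workload code, so each workload receives a unique
        # and repeatable identifier.
        digest = hashlib.sha256(code_bytes).hexdigest()[:16]
        return f"{executable_name}-{digest}"

    print(workload_label("transcode", b"example workload code bytes"))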
[0063] The workload behavior predictor 1446, in the illustrative
embodiment, is configured to analyze the telemetry data 1402 to
identify different phases of resource utilization within the
telemetry data 1402 for each workload. Each resource utilization
phase may be embodied as a period of time in which the resource
utilization of one or more resources allocated to a managed node
1260 satisfies a predefined threshold. For example, a utilization
of at least 85% of the allocated processor capacity may be
indicative of a high processor utilization phase, and a utilization
of at least 85% of the allocated memory capacity may be indicative
of a high memory utilization phase. In the illustrative embodiment,
the workload behavior predictor 1446 is further to identify
patterns in the resource utilization phases of the workloads (e.g.,
a high processor utilization phase, followed by a high memory
utilization phase, followed by a phase of low resource utilization,
which is then followed by the high processor utilization phase
again). The workload behavior predictor 1446 may be configured to
utilize the identified resource utilization phase patterns to
determine a present resource utilization phase of a given workload,
predict the next resource utilization phase based on the patterns, and
determine an amount of remaining time until the workload transitions
to the next resource utilization phase.
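For illustration only, the phase classification and pattern-based
prediction attributed to the workload behavior predictor 1446 could be
sketched as below. The 85% thresholds come from the paragraph above;
the simple lookup of the most recent matching phase in the history is
an assumption (a practical implementation could use a more elaborate
sequence model).

    from typing import List, Optional

    def classify_phase(cpu_util: float, mem_util: float) -> str:
        # Map utilization of allocated capacity to a coarse phase label,
        # using the 85% thresholds described above.
        if cpu_util >= 0.85:
            return "high_cpu"
        if mem_util >= 0.85:
            return "high_mem"
        return "low"

    def predict_next_phase(history: List[str], current: str) -> Optional[str]:
        # Predict the next phase by locating the most recent earlier
        # occurrence of the current phase and returning what followed it.
        for i in range(len(history) - 1, 0, -1):
            if history[i - 1] == current:
                return history[i]
        return None

    history = ["high_cpu", "high_mem", "low", "high_cpu", "high_mem", "low"]
    print(predict_next_phase(history, "high_cpu"))  # prints "high_mem"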
[0064] The acceleration manager 1448, in the illustrative
embodiment, is configured to generate, from the telemetry data 1402,
the job data 1410, identify jobs within the workloads to be
accelerated based on their types, residency time in the job queue, how
often the jobs are executed, and other factors, coordinate the
selection and provisioning of accelerator resources 205-2, such as
FPGAs, available within the data center 1100, and manage the timing of
the allocation and/or deallocation of the accelerator resources 205-2
to coincide with the predicted times when the jobs to be accelerated
are likely to be initiated (e.g., called) by the workloads.
[0065] The multi-objective analyzer 1450, in the illustrative
embodiment, is configured to determine whether an efficiency objective
and/or other objectives in the resource allocation objective data 1404
are being met during the execution of workloads, and to determine
adjustments to the allocation of resources among the managed nodes
1260 to enable the one or more objectives to be satisfied. As such,
with regard to the
allocation of accelerator resources 205-2, the multi-objective
analyzer 1450 coordinates with the acceleration manager 1448 to
determine which accelerator resources 205-2 to allocate to which
managed nodes 1260 and at what time. In the illustrative
embodiment, the multi-objective analyzer 1450 may include a model
of the data center 1100 that simulates the expected effects,
including power consumption, heat generation, changes to compute
capacity, and other factors, in response to various adjustments to
the allocations of resources among the managed nodes 1260 and/or
the settings of components (e.g., increasing or decreasing clock
speeds, enabling or disabling support for extended instruction
sets, etc.) within the resources. To do so, in the illustrative
embodiment, the multi-objective analyzer 1450 includes a resource
allocator 1452 and a resource settings adjuster 1454. The resource
allocator 1452, in the illustrative embodiment, is configured to
issue instructions to the managed nodes 1260 to allocate or
deallocate resources as determined by the multi-objective analyzer
1450 and the acceleration manager 1448, and to update the resource
allocation data 1412 to indicate the present state of allocation of
the resources among the managed nodes 1260. Similarly, the resource
settings adjuster 1454, in the illustrative embodiment, is
configured to issue instructions to one or more of the managed nodes
1260 to adjust settings of resources allocated to the managed nodes
1260, such as by adjusting a firmware setting to increase or
decrease a clock speed of a processor, increasing or decreasing
power utilization settings, and/or other settings that affect the
operation of the resources.
[0066] It should be appreciated that each of the workload labeler
1442, the workload classifier 1444, the workload behavior predictor
1446, the acceleration manager 1448, the multi-objective analyzer
1450, the resource allocator 1452, and the resource settings
adjuster 1454 may be separately embodied as hardware, firmware,
software, virtualized hardware, emulated architecture, and/or a
combination thereof. For example, the workload labeler 1442 may be
embodied as a hardware component, while the workload classifier
1444, the workload behavior predictor 1446, the acceleration
manager 1448, the multi-objective analyzer 1450, the resource
allocator 1452, and the resource settings adjuster 1454 are
embodied as virtualized hardware components or as some other
combination of hardware, firmware, software, virtualized hardware,
emulated architecture, and/or a combination thereof.
[0067] Referring now to FIG. 15, in use, the orchestrator server
1240 may execute a method 1500 for managing the allocation of
accelerator resources 205-2 among the managed nodes 1260 as the
managed nodes 1260 execute workloads. The method 1500 begins with
block 1502, in which the orchestrator server 1240 determines
whether to manage the allocation of resources among the managed
nodes 1260. In the illustrative embodiment, the orchestrator server
1240 determines to manage the allocation of resources if the
orchestrator server 1240 is powered on, in communication with the
managed nodes 1260, and has received at least one request from the
client device 1220 to provide cloud services (i.e., to perform one
or more workloads). In other embodiments, the orchestrator server
1240 may determine whether to manage the allocation of resources
based on other factors. Regardless, in response to a determination
to manage the allocation of resources, in the illustrative
embodiment, the method 1500 advances to block 1504 in which the
orchestrator server 1240 may obtain resource allocation objective
data (e.g., the resource allocation objective data 1404). In doing
so, the orchestrator server 1240 may obtain the resource allocation
objective data 1404 from a user (e.g., an administrator) through a
graphical user interface (not shown), from a configuration file, or
from another source. The orchestrator server 1240, in the
illustrative embodiment, may obtain performance objective data,
indicative of a target speed at which workloads are to be executed
(e.g., a target time period in which to complete execution of a
workload, a target number of operations per second, etc.), as
indicated in block 1506. In receiving the resource allocation
objective data 1404, the orchestrator server 1240 may receive power
consumption objective data indicative of a target power usage or
threshold amount of power usage of the resources allocated to each
managed node 1260 as they execute the workloads, as indicated in
block 1508. Additionally or alternatively, the orchestrator server
1240 may receive reliability objective data indicative of a target
life cycle of one or more resources (e.g., a target life cycle of a
data storage device, a target life cycle of a cooling fan, etc.),
as indicated in block 1510. As indicated in block 1512, the
orchestrator server 1240 may also receive thermal objective data
indicative of one or more target temperatures of one or more
resources (e.g., one or more CPUs 1302, etc.).
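For illustration only, the several categories of objective data
obtained in blocks 1506 through 1512 can be pictured as one record.
The dataclass below is a hypothetical sketch; the field names, units,
and example values are assumptions used to mirror the objectives
listed above.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class ResourceAllocationObjectives:
        # Illustrative container for the resource allocation objective
        # data 1404.
        target_ops_per_second: Optional[float] = None    # performance (block 1506)
        target_power_watts: Optional[float] = None        # power consumption (block 1508)
        target_device_life_hours: Optional[float] = None  # reliability (block 1510)
        target_temperature_c: Optional[float] = None      # thermal (block 1512)

    objectives = ResourceAllocationObjectives(
        target_ops_per_second=1.0e6,
        target_power_watts=450.0,
        target_temperature_c=70.0,
    )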
[0068] In block 1514, in the illustrative embodiment, the
orchestrator server 1240 allocates resources to the managed nodes
1260. Initially, the orchestrator server 1240 has not received any
telemetry data 1402 to inform a decision as to which resources to
allocate to the various managed nodes 1260. As such, as indicated
in block 1516, the orchestrator server 1240 may initially allocate
no accelerator resources 205-2 to any of the managed nodes 1260.
Alternatively, as indicated in block 1518, the orchestrator server
1240 may assign accelerator resources 205-2 among the managed nodes
1260 according to a default scheme (e.g., dividing the accelerator
resources 205-2 among the managed nodes 1260 evenly, allocating a
predefined number of accelerator resources 205-2 to each managed
node 1260 as the managed nodes 1260 are defined until no more
accelerator resources 205-2 are available, etc.). In
doing so, the orchestrator server 1240 may defer allocating any
FPGAs to the managed nodes 1260 until after the workloads have been
assigned and the FPGAs have been provisioned (e.g., configured) to
perform one or more jobs to be accelerated, as described in more
detail herein. In block 1520, the orchestrator server 1240 assigns
workloads to the managed nodes 1260 for execution and, as indicated
in block 1522, begins receiving the telemetry data 1402 as the
workloads are executed by the managed nodes 1260. Subsequently, the
method 1500 advances to block 1524 of FIG. 16 in which the
orchestrator server 1240 determines, from the telemetry data 1402,
predicted demand for acceleration (e.g., for which accelerator
resources 205-2 should be allocated) as explained in more detail
herein.
[0069] Referring now to FIG. 16, in determining, from the telemetry
data 1402, the predicted demand for acceleration, the orchestrator
server 1240 may identify jobs within the assigned workloads for
acceleration, as indicated in block 1526. As indicated in block
1528, in the illustrative embodiment, the orchestrator server 1240
may analyze a job queue (e.g., the job data 1410) to identify jobs
within the assigned workloads for acceleration. In doing so, the
orchestrator server 1240 may determine an average amount of time
each job resides in the queue (e.g., before being completed), as
indicated in block 1530. As indicated in block 1532, the
orchestrator server 1240 may apply a smoothing algorithm, such as an
exponential smoothing algorithm, to one or more times indicated by
the job queue to determine the average amount of time each job
resides in the job queue. As indicated in block 1534, the
orchestrator server 1240 may determine local counts and global
counts of jobs executed, and compare the local and global counts to
one or more threshold count values. For example, the orchestrator
server 1240 may maintain a count of how many times each job has
been performed for each workload (e.g., a local count) as well as a
count of how many times each job, regardless of the particular
workload or managed node 1260 associated with it, has been
performed. If either of the counts satisfies (e.g., is equal to or
exceeds) a predefined threshold value, the orchestrator server 1240
may identify the corresponding job as one that should be
accelerated (e.g., executed with one or more accelerator resources
205-2).
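For illustration only, the queue analysis of blocks 1530 through 1534,
namely an exponentially smoothed residence time and a comparison of
local and global execution counts against a threshold, might be
expressed as follows. The smoothing factor and the threshold value are
assumptions.

    def smoothed_residence_time(previous_avg: float, new_time: float,
                                alpha: float = 0.3) -> float:
        # Exponential smoothing of the time a job resides in the job
        # queue (block 1532); alpha is an assumed smoothing factor.
        return alpha * new_time + (1.0 - alpha) * previous_avg

    def should_accelerate(local_count: int, global_count: int,
                          threshold: int = 100) -> bool:
        # Identify a job for acceleration when either its per-workload
        # (local) count or its data-center-wide (global) count satisfies
        # the threshold (block 1534); the threshold value is assumed.
        return local_count >= threshold or global_count >= threshold

    avg = 10.0
    for observed in (12.0, 9.0, 15.0):
        avg = smoothed_residence_time(avg, observed)
    print(round(avg, 2), should_accelerate(local_count=42, global_count=180))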
[0070] Still referring to FIG. 16, in determining the predicted
demand for acceleration, the orchestrator server 1240 may
additionally identify characteristics of the jobs being executed,
as indicated in block 1536. In doing so, the orchestrator server
1240 may determine whether each job is amenable to acceleration
(e.g., whether an accelerator resource could execute the job faster
or more efficiently than a general purpose processor 1302). Further,
as indicated in block 1540, the orchestrator server 1240
may determine the type of each job, such as by analyzing and
classifying the types of functions as indicative of certain types
of operations (e.g., compression operations, encryption operations,
etc.). As indicated in block 1542, the orchestrator server 1240 may
determine characteristics of the input data used by the jobs, such
as whether the input data is formatted as a matrix of values or in
another format, the size (e.g., in bytes) of the input data, and/or
other characteristics of the input data. As described above, the analysis
may be performed on the job data 1410 which, in the illustrative
embodiment, is generated from the telemetry data 1402 reported by
the managed nodes 1260. As such, in the illustrative embodiment,
the managed nodes 1260 may be configured to provide information
indicative of the types of functions within each job and the input
data characteristics for each job. As indicated in block 1544, in
the process of identifying the characteristics of the jobs, the
orchestrator server 1240 may assign a globally unique identifier
(e.g., a number, tag, alphanumeric sequence, or other identifier
that is unique) to each job. The globally unique identifier may be
generated from an identifier for each job reported from each
managed node 1260 in the managed node's 1260 corresponding
telemetry data 1402, such as by appending a hash of the workload
label and a unique identifier of the managed node 1260 to the
identifier of the corresponding job indicated in the telemetry data
1402 from the managed node 1260. In block 1546, the orchestrator
server 1240 may determine a predicted time of the demand for
acceleration (e.g., when the demand will likely occur). As
indicated in block 1548, the orchestrator server 1240 may determine
the predicted time of the demand by analyzing a pattern of the job
executions for each workload (e.g., job A resides in the job queue
for 10 seconds, followed by job B, which resides in the job queue
for 15 seconds, followed again by job A). Afterwards, the method
1500 advances to block 1550 of FIG. 17 in which the orchestrator
server 1240 provisions, prior to the predicted demand for
acceleration, one or more accelerator resources 205-2 to accelerate
the jobs within the workloads.
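For illustration only, the construction of a globally unique
identifier in block 1544, appending a hash of the workload label and
the managed node identifier to the job identifier reported by the
node, could be written as below. The hash function and formatting are
assumptions.

    import hashlib

    def job_guid(node_job_id: str, workload_label: str, node_id: str) -> str:
        # Illustrative identifier per block 1544: the job identifier
        # reported by the managed node, suffixed with a hash of the
        # workload label and the managed node identifier.
        key = f"{workload_label}:{node_id}".encode()
        suffix = hashlib.sha256(key).hexdigest()[:12]
        return f"{node_job_id}-{suffix}"

    print(job_guid("compress-frame", "transcode-ab12cd34", "node-7"))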
[0071] Referring now to FIG. 17, in provisioning the accelerator
resources 205-2, the orchestrator server 1240, in the illustrative
embodiment, selects one or more field programmable gate arrays to
provision, as indicated in block 1552. As indicated in block 1554,
the orchestrator server 1240, in the illustrative embodiment,
prefers FPGAs that are already configured (e.g., provisioned) to
perform a given job that is to be accelerated in the future. By
preferring (e.g., selecting over other FPGAs) FPGAs that are
already provisioned to perform the job to be accelerated, the
orchestrator server 1240 may save time that would otherwise be
consumed to provide a bitstream indicative of the desired
configuration to the FPGA and wait for the FPGA to configure its
field programmable gates according to the desired configuration.
The orchestrator server 1240 may store, in the resource allocation
data 1412, information indicative of which FPGAs have been
provisioned to perform which jobs. In block 1556, the orchestrator
server 1240, in the illustrative embodiment, determines the number
of FPGAs to provision, such as by counting the number of jobs that
have been identified for acceleration, determining the number of
available FPGAs, and determining to use one available FPGA for each
job or up to the number of available FPGAs, if the number of
available FPGAs is less than the number of jobs to accelerate. In
block 1558, the orchestrator server 1240 may select FPGAs on sleds
(e.g., accelerator sled 1130) that are different from the sleds on
which the workloads are executed by general purpose processors
(e.g., compute sled 204-4). As indicated in block 1560, the
orchestrator server 1240 may select FPGAs as a function of a target
heat generation, a target power consumption, and/or a target
economic cost. For example, some FPGAs may be more efficient in
terms of heat generation and/or power consumption than other FPGAs,
because they are composed of smaller or otherwise more efficient
components. As such, the cost of cooling and powering less
efficient FPGAs may be greater than cooling and powering other
FPGAs. In block 1562, the orchestrator server 1240 may determine a
configuration time for each FPGA (e.g., the amount of time that
will elapse to configure the FPGA to perform a job). Initially, the
orchestrator server 1240 may not have access to data indicative of
the amount of time required to provision a particular FPGA and may
instead use a default estimated time (e.g., two minutes). If and
when the orchestrator server 1240 does provision the FPGA, the
orchestrator server 1240 may measure the actual amount of time that
elapses to provision the FPGA and refer to that measured time in
later determinations.
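For illustration only, the selection logic of blocks 1552 through 1562
(prefer FPGAs already provisioned for a job, bound the count by
availability, and fall back to a default configuration-time estimate)
might be organized as in the sketch below. The two-minute default
reflects the example above; the class and function names are
assumptions.

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class Fpga:
        fpga_id: str
        sled_id: str
        provisioned_job: Optional[str] = None           # job the FPGA is configured for
        measured_config_time_s: Optional[float] = None  # measured provisioning time

    DEFAULT_CONFIG_TIME_S = 120.0  # default estimate noted in the text above

    def select_fpgas(jobs: List[str], available: List[Fpga]) -> List[Fpga]:
        # Prefer FPGAs already provisioned for the job (block 1554) and
        # use at most one FPGA per job, bounded by availability (block 1556).
        chosen: List[Fpga] = []
        pool = list(available)
        for job in jobs:
            match = next((f for f in pool if f.provisioned_job == job), None)
            if match is None and pool:
                match = pool[0]
            if match is None:
                break
            pool.remove(match)
            chosen.append(match)
        return chosen

    def config_time(fpga: Fpga) -> float:
        # Measured provisioning time if known, else the default estimate
        # (block 1562).
        return fpga.measured_config_time_s or DEFAULT_CONFIG_TIME_S

    fpgas = [Fpga("fpga-1", "sled-9", provisioned_job="encrypt"),
             Fpga("fpga-2", "sled-9")]
    print([f.fpga_id for f in select_fpgas(["encrypt", "compress"], fpgas)])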
[0072] In block 1564, the orchestrator server 1240 provides (e.g.,
sends) a bitstream indicative of a desired configuration of each
FPGA to each FPGA to be provisioned. The bitstream may include a
portion specific to the architecture of the particular FPGA (e.g.,
to initialize the FPGA for configuration) and another portion
indicative of the desired configuration of the gates within the
FPGA to perform the corresponding job to be accelerated. In
providing the bitstreams, in the illustrative embodiment and as
indicated in block 1566, the orchestrator server 1240 provides the
bitstreams in advance of the predicted time (e.g., the time
predicted in block 1546 of FIG. 16) that the job to be accelerated
is scheduled to be executed (e.g., in advance of the time of the
predicted demand) by the determined configuration time for the
corresponding FPGA. For example, if the configuration time for the
FPGA is two minutes, in the illustrative embodiment, the
orchestrator server 1240 sends the bitstream to the FPGA at least
two minutes before the corresponding job is to be executed (e.g.,
two minutes before the job enters the job queue).
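For illustration only, the timing rule of block 1566, sending the
bitstream at least the configuration time ahead of the predicted
demand, reduces to a subtraction. The function name and example values
below are assumptions.

    def bitstream_send_deadline(predicted_demand_time_s: float,
                                configuration_time_s: float) -> float:
        # Latest time (in seconds on a common clock) at which the
        # bitstream must be provided so that the FPGA is configured
        # before the demand for acceleration occurs.
        return predicted_demand_time_s - configuration_time_s

    # A two-minute configuration time means the bitstream is sent at
    # least 120 seconds before the job is expected to enter the queue.
    print(bitstream_send_deadline(predicted_demand_time_s=1000.0,
                                  configuration_time_s=120.0))  # prints 880.0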
[0073] Afterwards, the method 1500 advances to block 1568 in which
the orchestrator server 1240 allocates the accelerator resources
205-2 to the managed nodes 1260 to accelerate execution of the
workloads (e.g., the workload jobs that were identified for
acceleration in block 1526 of FIG. 16). In block 1570, the
orchestrator server 1240, in the illustrative embodiment, allocates
the provisioned FPGAs from block 1550 to the managed nodes 1260
associated with the jobs identified for acceleration. The
orchestrator server 1240 may do so by providing each managed node
1260 with address information for the corresponding FPGAs to enable
the managed nodes 1260 to communicate with the FPGAs. As indicated
in block 1572, the orchestrator server 1240 may allocate other
accelerator resources 205-2 (e.g., graphics accelerators, etc.) to
the managed nodes 1260, such as if one or more jobs are not
suitable for acceleration by an FPGA, as determined in blocks 1538
through 1542 of FIG. 16, or if the set of available FPGAs in the
data center 1100 has been depleted. In block 1574, the orchestrator
server 1240 may deallocate one or more accelerator resources 205-2
from one or more managed nodes 1260 (e.g., if the corresponding
jobs have completed), thereby replenishing the set of available
accelerator resources 205-2. As indicated in block 1576, in
allocating and/or deallocating the accelerator resources 205-2, the
orchestrator server 1240, in the illustrative embodiment, does so
to satisfy the one or more resource allocation objectives (e.g.,
objectives in the resource allocation objective data 1404). For
example, if accelerating a particular job would increase
performance beyond a target resource allocation objective (e.g., a
number of operations per second) and would cause heat generation in
excess of a target temperature in one or more areas of the data
center 1100, the orchestrator server 1240 may determine not to
allocate an accelerator resource 205-2 to that job. In some
embodiments, the orchestrator server 1240 may determine whether to
ultimately allocate an accelerator resource 205-2 to accelerate a
particular job in view of the resource allocation objectives prior
to the provisioning operations in block 1550. Subsequently, the
method 1500 returns to block 1522 of FIG. 15 in which the
orchestrator server 1240 continues collecting telemetry data 1402
as the workloads are executed.
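For illustration only, the objective-driven allocation decision in the
example of the preceding paragraph can be summarized by a simple
predicate. The inputs and the particular comparison below are
assumptions and do not limit the disclosure.

    def allow_acceleration(predicted_ops_per_s: float, target_ops_per_s: float,
                           predicted_temp_c: float, target_temp_c: float) -> bool:
        # Decline to allocate an accelerator resource when acceleration
        # would push performance beyond its target while also driving
        # temperature past its target, mirroring the example above.
        overshoots_performance = predicted_ops_per_s > target_ops_per_s
        overheats = predicted_temp_c > target_temp_c
        return not (overshoots_performance and overheats)

    print(allow_acceleration(1.2e6, 1.0e6, 78.0, 70.0))  # prints False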
EXAMPLES
[0074] Illustrative examples of the technologies disclosed herein
are provided below. An embodiment of the technologies may include
any one or more, and any combination of, the examples described
below.
[0075] Example 1 includes an orchestrator server to dynamically
manage the allocation of accelerator resources, the orchestrator
server comprising one or more processors; one or more memory
devices having stored therein a plurality of instructions that,
when executed by the one or more processors, cause the orchestrator
server to assign a workload to a managed node for execution;
determine a predicted demand for one or more accelerator resources
to accelerate the execution of one or more jobs within the
workload; provision, prior to the predicted demand, one or more
accelerator resources to accelerate the one or more jobs; and
allocate the one or more provisioned accelerator resources to the
managed node to accelerate the execution of the one or more
jobs.
[0076] Example 2 includes the subject matter of Example 1, and
wherein to determine the predicted demand comprises to determine a
demand for one or more field programmable gate arrays (FPGAs).
[0077] Example 3 includes the subject matter of any of Examples 1
and 2, and wherein to provision the one or more accelerator
resources comprises to provide, to the one or more FPGAs, a bit
stream indicative of a configuration of each FPGA to accelerate
execution of the one or more jobs.
[0078] Example 4 includes the subject matter of any of Examples
1-3, and wherein to determine the predicted demand comprises to
determine the number of accelerator resources to allocate to
satisfy the predicted demand.
[0079] Example 5 includes the subject matter of any of Examples
1-4, and wherein to provision the one or more accelerator resources
comprises to provision one or more accelerator resources located on
one or more sleds that are different than a sled on which the
workload is presently executed.
[0080] Example 6 includes the subject matter of any of Examples
1-5, and wherein the plurality of instructions, when executed,
further cause the orchestrator server to determine a configuration
time period to provision each of the one or more accelerator
resources; and determine a predicted time of the predicted demand;
and wherein to provision the one or more accelerator resources
comprises to begin configuration of the one or more accelerator
resources for accelerated execution of the one or more jobs at a
time that is earlier than the predicted time by at least the
configuration time period.
[0081] Example 7 includes the subject matter of any of Examples
1-6, and wherein the plurality of instructions, when executed,
further cause the orchestrator server to identify one or more jobs
within the workload to be accelerated with one or more field
programmable gate arrays (FPGAs); and associate each identified job
with a globally unique identifier indicative of one or more of a
specific interface of the job or a definition of the job.
[0082] Example 8 includes the subject matter of any of Examples
1-7, and wherein to associate each identified job with a globally
unique identifier comprises to associate each identified job with a
globally unique identifier indicative of one or more of a size of
an input or a format of an input to the job.
[0083] Example 9 includes the subject matter of any of Examples
1-8, and wherein the managed node is one of a plurality of managed
nodes and the workload is one of a plurality of workloads executed
by the managed nodes and the plurality of instructions, when
executed, further cause the orchestrator server to determine, for
each workload, a local count indicative of a number of times a job
is executed in each workload; determine a global count indicative
of a number of times a job is executed by all of the managed nodes;
determine whether one or more of the local count or the global
count satisfies a threshold count value; and identify, in response
to a determination that one or more of the local count or the
global count satisfies the threshold count value, the associated
job as a job to be accelerated.
[0084] Example 10 includes the subject matter of any of Examples
1-9, and wherein the plurality of instructions, when executed,
further cause the orchestrator server to identify, from a plurality
of accelerator resources, the one or more accelerator resources to
accelerate the one or more jobs.
[0085] Example 11 includes the subject matter of any of Examples
1-10, and wherein to identify the one or more accelerator resources
comprises to determine whether one or more of the accelerator
resources is already configured to perform one or more of the jobs;
and select, in response to a determination that one or more of the
accelerator resources is already configured to perform one or more
of the jobs, the one or more already-configured accelerator
resources for acceleration of the one or more jobs.
[0086] Example 12 includes the subject matter of any of Examples
1-11, and wherein to identify the one or more accelerator resources
comprises to select the one or more accelerator resources as a
function of one or more of a target heat generation, a target power
usage, or a target economic cost of utilization of the one or more
accelerator resources.
[0087] Example 13 includes the subject matter of any of Examples
1-12, and wherein the managed node is one of a plurality of managed
nodes and the workload is one of a plurality of workloads executed
by the managed nodes, and wherein to determine the demand comprises
to establish a job queue indicative of all jobs for all of the
workloads to be performed; determine an average time period in
which each job resides in the job queue; and determine the demand
for each job as a function of the average time period for each
job.
[0088] Example 14 includes the subject matter of any of Examples
1-13, and wherein to determine the demand for each job further
comprises to apply an exponential averaging algorithm to the time
period in which each job resides in the job queue.
[0089] Example 15 includes a method for dynamically managing the
allocation of accelerator resources, the method comprising
assigning, by an orchestrator server, a workload to a managed node
for execution; determining, by the orchestrator server, a predicted
demand for one or more accelerator resources to accelerate the
execution of one or more jobs within the workload; provisioning, by
the orchestrator server and prior to the predicted demand, one or
more accelerator resources to accelerate the one or more jobs; and
allocating, by the orchestrator server, the one or more provisioned
accelerator resources to the managed node to accelerate the
execution of the one or more jobs.
[0090] Example 16 includes the subject matter of Example 15, and
wherein determining the predicted demand comprises determining a
demand for one or more field programmable gate arrays (FPGAs).
[0091] Example 17 includes the subject matter of any of Examples 15
and 16, and wherein provisioning the one or more accelerator
resources comprises providing, to the one or more FPGAs, a bit
stream indicative of a configuration of each FPGA to accelerate
execution of the one or more jobs.
[0092] Example 18 includes the subject matter of any of Examples
15-17, and wherein determining the predicted demand comprises
determining the number of accelerator resources to allocate to
satisfy the predicted demand.
[0093] Example 19 includes the subject matter of any of Examples
15-18, and wherein provisioning the one or more accelerator
resources comprises provisioning one or more accelerator resources
located on one or more sleds that are different than a sled on
which the workload is presently executed.
[0094] Example 20 includes the subject matter of any of Examples
15-19, and further including determining, by the orchestrator
server, a configuration time period to provision each of the one or
more accelerator resources; and determining, by the orchestrator
server, a predicted time of the predicted demand; and wherein
provisioning the one or more accelerator resources comprises
beginning configuration of the one or more accelerator resources
for accelerated execution of the one or more jobs at a time that is
earlier than the predicted time by at least the configuration time
period.
[0095] Example 21 includes the subject matter of any of Examples
15-20, and further including identifying, by the orchestrator
server, one or more jobs within the workload to be accelerated with
one or more field programmable gate arrays (FPGAs); and
associating, by the orchestrator server, each identified job with a
globally unique identifier indicative of one or more of a specific
interface of the job or a definition of the job.
[0096] Example 22 includes the subject matter of any of Examples
15-21, and wherein associating each identified job with a globally
unique identifier comprises associating each identified job with a
globally unique identifier indicative of one or more of a size of
an input or a format of an input to the job.
[0097] Example 23 includes the subject matter of any of Examples
15-22, and wherein the managed node is one of a plurality of
managed nodes and the workload is one of a plurality of workloads
executed by the managed nodes, the method further comprising
determining, by the orchestrator server and for each workload, a
local count indicative of a number of times a job is executed in
each workload; determining, by the orchestrator server, a global
count indicative of a number of times a job is executed by all of
the managed nodes; determining, by the orchestrator server, whether
one or more of the local count or the global count satisfies a
threshold count value; and identifying, by the orchestrator server
and in response to a determination that one or more of the local
count or the global count satisfies the threshold count value, the
associated job as a job to be accelerated.
[0098] Example 24 includes the subject matter of any of Examples
15-23, and further including identifying, by the orchestrator
server and from a plurality of accelerator resources, the one or
more accelerator resources to accelerate the one or more jobs.
[0099] Example 25 includes the subject matter of any of Examples
15-24, and wherein identifying the one or more accelerator
resources comprises determining whether one or more of the
accelerator resources is already configured to perform one or more
of the jobs, the method further comprising selecting, by the
orchestrator server in response to a determination that one or more
of the accelerator resources is already configured to perform one or
more of the jobs, the one or more already-configured accelerator
resources for acceleration of the one or more jobs.
[0100] Example 26 includes the subject matter of any of Examples
15-25, and wherein identifying the one or more accelerator
resources comprises selecting the one or more accelerator resources
as a function of one or more of a target heat generation, a target
power usage, or a target economic cost of utilization of the one or
more accelerator resources.
[0101] Example 27 includes the subject matter of any of Examples
15-26, and wherein the managed node is one of a plurality of
managed nodes and the workload is one of a plurality of workloads
executed by the managed nodes, and wherein determining the demand
comprises establishing a job queue indicative of all jobs for all
of the workloads to be performed; determining an average time
period in which each job resides in the job queue; and determining
the demand for each job as a function of the average time period
for each job.
[0102] Example 28 includes the subject matter of any of Examples
15-27, and wherein determining the demand for each job further
comprises applying an exponential averaging algorithm to the time
period in which each job resides in the job queue.
[0103] Example 29 includes an orchestrator server comprising means
for performing the method of any of Examples 15-28.
[0104] Example 30 includes an orchestrator server to dynamically
manage the allocation of accelerator resources, the orchestrator
server comprising one or more processors; one or more memory
devices having stored therein a plurality of instructions that,
when executed by the one or more processors, cause the orchestrator
server to perform the method of any of Examples 15-28.
[0105] Example 31 includes one or more machine-readable storage
media comprising a plurality of instructions stored thereon that,
in response to being executed, cause an orchestrator server to
perform the method of any of Examples 15-28.
[0106] Example 32 includes an orchestrator server to dynamically
manage the allocation of accelerator resources, the orchestrator
server comprising resource manager circuitry to assign a workload
to a managed node for execution, determine a predicted demand for
one or more accelerator resources to accelerate the execution of
one or more jobs within the workload, provision, prior to the
predicted demand, one or more accelerator resources to accelerate
the one or more jobs, and allocate the one or more provisioned
accelerator resources to the managed node to accelerate the
execution of the one or more jobs.
[0107] Example 33 includes the subject matter of Example 32, and
wherein to determine the predicted demand comprises to determine a
demand for one or more field programmable gate arrays (FPGAs).
[0108] Example 34 includes the subject matter of any of Examples 32
and 33, and wherein to provision the one or more accelerator
resources comprises to provide, to the one or more FPGAs, a bit
stream indicative of a configuration of each FPGA to accelerate
execution of the one or more jobs.
[0109] Example 35 includes the subject matter of any of Examples
32-34, and wherein to determine the predicted demand comprises to
determine the number of accelerator resources to allocate to
satisfy the predicted demand.
[0110] Example 36 includes the subject matter of any of Examples
32-35, and wherein to provision the one or more accelerator
resources comprises to provision one or more accelerator resources
located on one or more sleds that are different than a sled on
which the workload is presently executed.
[0111] Example 37 includes the subject matter of any of Examples
32-36, and wherein the resource manager circuitry is further to
determine a configuration time period to provision each of the one
or more accelerator resources; and determine a predicted time of
the predicted demand; and wherein to provision the one or more
accelerator resources comprises to begin configuration of the one
or more accelerator resources for accelerated execution of the one
or more jobs at a time that is earlier than the predicted time by
at least the configuration time period.
[0112] Example 38 includes the subject matter of any of Examples
32-37, and wherein the resource manager circuitry is further to
identify one or more jobs within the workload to be accelerated
with one or more field programmable gate arrays (FPGAs); and associate
each identified job with a globally unique identifier indicative of
one or more of a specific interface of the job or a definition of
the job.
[0113] Example 39 includes the subject matter of any of Examples
32-38, and wherein to associate each identified job with a globally
unique identifier comprises to associate each identified job with a
globally unique identifier indicative of one or more of a size of
an input or a format of an input to the job.
[0114] Example 40 includes the subject matter of any of Examples
32-39, and wherein the managed node is one of a plurality of
managed nodes and the workload is one of a plurality of workloads
executed by the managed nodes and the resource manager circuitry is
further to determine, for each workload, a local count indicative
of a number of times a job is executed in each workload; determine
a global count indicative of a number of times a job is executed by
all of the managed nodes; determine whether one or more of the
local count or the global count satisfies a threshold count value;
and identify, in response to a determination that one or more of
the local count or the global count satisfies the threshold count
value, the associated job as a job to be accelerated.
[0115] Example 41 includes the subject matter of any of Examples
32-40, and wherein the resource manager circuitry is further to
identify, from a plurality of accelerator resources, the one or
more accelerator resources to accelerate the one or more jobs.
[0116] Example 42 includes the subject matter of any of Examples
32-41, and wherein to identify the one or more accelerator
resources comprises to determine whether one or more of the
accelerator resources is already configured to perform one or more
of the jobs; and select, in response to a determination that one or
more of the accelerator resources is already configured to perform one
or more of the jobs, the one or more already-configured accelerator
resources for acceleration of the one or more jobs.
[0117] Example 43 includes the subject matter of any of Examples
32-42, and wherein to identify the one or more accelerator
resources comprises to select the one or more accelerator resources
as a function of one or more of a target heat generation, a target
power usage, or a target economic cost of utilization of the one or
more accelerator resources.
[0118] Example 44 includes the subject matter of any of Examples
32-43, and wherein the managed node is one of a plurality of
managed nodes and the workload is one of a plurality of workloads
executed by the managed nodes, and wherein to determine the demand
comprises to establish a job queue indicative of all jobs for all
of the workloads to be performed; determine an average time period
in which each job resides in the job queue; and determine the
demand for each job as a function of the average time period for
each job.
[0119] Example 45 includes the subject matter of any of Examples
32-44, and wherein to determine the demand for each job further
comprises to apply an exponential averaging algorithm to the time
period in which each job resides in the job queue.
[0120] Example 46 includes an orchestrator server to dynamically
manage the allocation of accelerator resources, the orchestrator
server comprising circuitry for assigning a workload to a managed
node for execution; means for determining a predicted demand for
one or more accelerator resources to accelerate the execution of
one or more jobs within the workload; circuitry for provisioning,
by the orchestrator server and prior to the predicted demand, one
or more accelerator resources to accelerate the one or more jobs;
and circuitry for allocating the one or more provisioned
accelerator resources to the managed node to accelerate the
execution of the one or more jobs.
[0121] Example 47 includes the subject matter of Example 46, and
wherein the means for determining the predicted demand comprises
means for determining a demand for one or more field programmable
gate arrays (FPGAs).
[0122] Example 48 includes the subject matter of any of Examples 46
and 47, and wherein the circuitry for provisioning the one or more
accelerator resources comprises circuitry for providing, to the one
or more FPGAs, a bit stream indicative of a configuration of each
FPGA to accelerate execution of the one or more jobs.
[0123] Example 49 includes the subject matter of any of Examples
46-48, and wherein the means for determining the predicted demand
comprises means for determining the number of accelerator resources
to allocate to satisfy the predicted demand.
[0124] Example 50 includes the subject matter of any of Examples
46-49, and wherein the circuitry for provisioning the one or more
accelerator resources comprises circuitry for provisioning one or
more accelerator resources located on one or more sleds that are
different than a sled on which the workload is presently
executed.
[0125] Example 51 includes the subject matter of any of Examples
46-50, and further including circuitry for determining a
configuration time period to provision each of the one or more
accelerator resources; and means for determining a predicted time
of the predicted demand; and wherein the circuitry for provisioning
the one or more accelerator resources comprises circuitry for
beginning configuration of the one or more accelerator resources
for accelerated execution of the one or more jobs at a time that is
earlier than the predicted time by at least the configuration time
period.
[0126] Example 52 includes the subject matter of any of Examples
46-51, and further including means for identifying one or more jobs
within the workload to be accelerated with one or more field
programmable gate arrays (FPGAs); and circuitry for associating
each identified job with a globally unique identifier indicative of
one or more of a specific interface of the job or a definition of
the job.
[0127] Example 53 includes the subject matter of any of Examples
46-52, and wherein the circuitry for associating each identified
job with a globally unique identifier comprises circuitry for
associating each identified job with a globally unique identifier
indicative of one or more of a size of an input or a format of an
input to the job.
[0128] Example 54 includes the subject matter of any of Examples
46-53, and wherein the managed node is one of a plurality of
managed nodes and the workload is one of a plurality of workloads
executed by the managed nodes, the orchestrator server further
comprising circuitry for determining, for each workload, a local
count indicative of a number of times a job is executed in each
workload; circuitry for determining a global count indicative of a
number of times a job is executed by all of the managed nodes;
circuitry for determining whether one or more of the local count or
the global count satisfies a threshold count value; and circuitry
for identifying, in response to a determination that one or more of
the local count or the global count satisfies the threshold count
value, the associated job as a job to be accelerated.
[0129] Example 55 includes the subject matter of any of Examples
46-54, and further including circuitry for identifying, from a
plurality of accelerator resources, the one or more accelerator
resources to accelerate the one or more jobs.
[0130] Example 56 includes the subject matter of any of Examples
46-55, and wherein the circuitry for identifying the one or more
accelerator resources comprises circuitry for determining whether
one or more of the accelerator resources is already configured to
perform one or more of the jobs, the orchestrator server further
comprising circuitry for selecting, in response to a determination
that one or more of the accelerator resources is already configured to
perform one or more of the jobs, the one or more already-configured
accelerator resources for acceleration of the one or more jobs.
[0131] Example 57 includes the subject matter of any of Examples
46-56, and wherein the circuitry for identifying the one or more
accelerator resources comprises circuitry for selecting the one or
more accelerator resources as a function of one or more of a target
heat generation, a target power usage, or a target economic cost of
utilization of the one or more accelerator resources.
[0132] Example 58 includes the subject matter of any of Examples
46-57, and wherein the managed node is one of a plurality of
managed nodes and the workload is one of a plurality of workloads
executed by the managed nodes, and wherein the means for
determining the demand comprises circuitry for establishing a job
queue indicative of all jobs for all of the workloads to be
performed; circuitry for determining an average time period in
which each job resides in the job queue; and circuitry for
determining the demand for each job as a function of the average
time period for each job.
[0133] Example 59 includes the subject matter of any of Examples
46-58, and wherein the means for determining the demand for each
job further comprises circuitry for applying an exponential
averaging algorithm to the time period in which each job resides in
the job queue.
* * * * *