U.S. patent application number 15/395988 was filed with the patent office on 2016-12-30 and published on 2018-01-25 for a memory module for a data center compute sled.
The applicant listed for this patent is INTEL CORPORATION. Invention is credited to MICHAEL CROCKER, AARON GORIUS, MOHAN J. KUMAR, MYLES WILDE, DIMITRIOS ZIAKAS.
Application Number: 20180024864 / 15/395988
Family ID: 60804962
Publication Date: 2018-01-25
United States Patent Application 20180024864
Kind Code: A1
WILDE; MYLES; et al.
January 25, 2018
Memory Module for a Data Center Compute Sled
Abstract
Examples may include a sled for a rack of a data center
including physical compute resources. The sled comprises a
processor component and a unitary memory module comprising a memory
controller and a quantity of memory based on the processor
component. The unitary memory module can comprise a quantity of
memory based on a number of cores of the processor component to which
the unitary memory module is communicably coupled.
Inventors: WILDE; MYLES (CHARLESTOWN, MA); GORIUS; AARON (UPTON, MA); CROCKER; MICHAEL (PORTLAND, OR); KUMAR; MOHAN J. (ALOHA, OR); ZIAKAS; DIMITRIOS (HILLSBORO, OR)
Applicant:
Name: INTEL CORPORATION
City: SANTA CLARA
State: CA
Country: US
Family ID: 60804962
Appl. No.: 15/395988
Filed: December 30, 2016
Related U.S. Patent Documents

Application Number    Filing Date     Patent Number
62427268              Nov 29, 2016
62376859              Aug 18, 2016
62365969              Jul 22, 2016
Current U.S. Class: 711/105

Current CPC Class: G06F 3/0655 20130101;
G08C 2200/00 20130101; G11C 5/06 20130101; G06F 2212/402 20130101;
H03M 7/6005 20130101; H04L 41/147 20130101; G06F 9/505 20130101;
H03M 7/4031 20130101; H04L 41/024 20130101; H04L 43/0817 20130101;
H04L 43/16 20130101; H04L 47/24 20130101; H04L 47/82 20130101; H04W
4/023 20130101; G06F 11/3414 20130101; H05K 7/2039 20130101; G06F
13/4068 20130101; G06F 3/061 20130101; G06F 3/0613 20130101; G06F
9/544 20130101; H05K 7/1489 20130101; H05K 7/20727 20130101; G06Q
50/04 20130101; G11C 5/02 20130101; H03M 7/4056 20130101; H04L
41/0813 20130101; G02B 6/4452 20130101; G06F 1/183 20130101; G06F
3/0625 20130101; G06F 9/5044 20130101; G06F 13/42 20130101; G06F
13/4282 20130101; G06Q 10/06314 20130101; H04L 43/08 20130101; H04L
67/1029 20130101; H05K 1/0203 20130101; H05K 7/1461 20130101; G06F
3/0665 20130101; G06F 12/0862 20130101; G06F 12/0893 20130101; G11C
14/0009 20130101; H04L 47/782 20130101; G02B 6/3893 20130101; G06F
9/3887 20130101; H03M 7/6023 20130101; H04L 9/0643 20130101; H04L
41/145 20130101; H04Q 11/0062 20130101; G06F 3/0664 20130101; G06F
2212/152 20130101; G08C 17/02 20130101; B65G 1/0492 20130101; G06F
12/10 20130101; H03M 7/3084 20130101; H04B 10/25 20130101; H04Q
2213/13523 20130101; H05K 7/1447 20130101; Y02D 10/00 20180101;
G06F 3/0688 20130101; G06F 2212/1041 20130101; G06F 2212/1044
20130101; G11C 11/56 20130101; H03M 7/4081 20130101; G06Q 10/087
20130101; H04L 47/805 20130101; H04Q 2011/0037 20130101; G06F
3/0619 20130101; G06F 2209/5019 20130101; H04L 49/25 20130101; H04L
49/555 20130101; H04L 67/306 20130101; G06F 3/0631 20130101; G06F
3/0673 20130101; G06F 9/4401 20130101; G06F 15/161 20130101; G06F
2212/1024 20130101; G06F 2212/401 20130101; G07C 5/008 20130101;
H04L 47/765 20130101; G06F 3/0658 20130101; G06F 13/385 20130101;
G06F 2212/7207 20130101; G11C 7/1072 20130101; H04L 41/5019
20130101; H04L 47/823 20130101; H05K 7/1442 20130101; H05K 7/1492
20130101; H05K 7/20709 20130101; H05K 7/20745 20130101; G06F 3/0647
20130101; G06Q 10/06 20130101; G06Q 10/20 20130101; H04L 41/0896
20130101; H04L 43/065 20130101; H04L 49/357 20130101; H04L 67/1012
20130101; H05K 5/0204 20130101; H05K 2201/10121 20130101; G06F
12/1408 20130101; G06F 13/409 20130101; G06F 15/8061 20130101; H04L
9/14 20130101; H04L 41/082 20130101; H04L 67/10 20130101; H04L
67/12 20130101; G06F 3/0638 20130101; G06F 3/0679 20130101; G06F
2212/1008 20130101; H03M 7/40 20130101; H04L 67/1004 20130101; H05K
13/0486 20130101; G06F 3/0653 20130101; G06F 9/4881 20130101; H03M
7/30 20130101; H04L 49/45 20130101; H04L 67/1097 20130101; H04L
67/34 20130101; H04L 69/329 20130101; H04Q 2213/13527 20130101;
G06F 13/1694 20130101; G06F 13/4022 20130101; G06F 16/9014
20190101; G06F 2209/5022 20130101; H04L 45/02 20130101; H04Q
2011/0041 20130101; H05K 7/1498 20130101; Y02P 90/30 20151101; H04L
49/15 20130101; H04Q 2011/0086 20130101; H05K 2201/10159 20130101;
G06F 9/30036 20130101; G06F 13/161 20130101; H03M 7/3086 20130101;
G05D 23/1921 20130101; G06F 11/141 20130101; G06F 2212/202
20130101; H04L 29/12009 20130101; H04L 67/1034 20130101; H05K
7/20836 20130101; H04L 41/12 20130101; H04L 45/52 20130101; H04Q
1/09 20130101; Y10S 901/01 20130101; G06F 3/0611 20130101; G06F
3/0616 20130101; G06F 3/065 20130101; G06F 9/5077 20130101; G06F
2209/483 20130101; H04L 67/16 20130101; H04Q 11/0071 20130101; H05K
1/181 20130101; H05K 7/1487 20130101; G06F 8/65 20130101; G06F
12/109 20130101; H04L 41/046 20130101; H05K 7/1491 20130101; G02B
6/3882 20130101; G02B 6/3897 20130101; G06F 3/064 20130101; H04Q
1/04 20130101; H04Q 2011/0079 20130101; G05D 23/2039 20130101; G06F
3/0689 20130101; H04L 49/00 20130101; H04L 67/02 20130101; H05K
7/1418 20130101; H05K 7/20736 20130101; H05K 2201/066 20130101;
G06F 3/0683 20130101; H04L 67/1014 20130101; H04L 69/04 20130101;
B25J 15/0014 20130101; G06F 1/20 20130101; G06F 3/0659 20130101;
H04L 49/35 20130101; H05K 7/1421 20130101; H05K 7/1485 20130101;
H04L 43/0876 20130101; H04L 47/38 20130101; H04L 67/1008 20130101;
Y04S 10/50 20130101; G02B 6/4292 20130101; G06F 3/067 20130101;
H04Q 11/00 20130101; H04Q 11/0003 20130101; H04Q 11/0005 20130101;
G06F 13/1668 20130101; H05K 2201/10189 20130101; G06F 9/5016
20130101; G06F 9/5027 20130101; G06F 9/5072 20130101; H04B 10/25891
20200501; H04L 9/3247 20130101; H04L 9/3263 20130101; H04L 43/0894
20130101; H04Q 2011/0052 20130101; H04Q 2011/0073 20130101; H04W
4/80 20180201; H05K 7/1422 20130101; H04L 12/2809 20130101
International Class: G06F 9/50 20060101 G06F009/50; G11C 5/02 20060101 G11C005/02; G11C 11/56 20060101 G11C011/56; G11C 7/10 20060101 G11C007/10
Claims
1. An apparatus for a sled to house physical compute resources of a
data center, the apparatus comprising: a substrate; a first socket
to receive a processor component, the first socket disposed on a
first surface of the substrate; and a first memory socket to
receive a memory module, the first memory socket disposed on a
second surface of the substrate different than the first surface of
the substrate, the first memory socket to couple the memory module
to the processor component.
2. The apparatus of claim 1, wherein the first memory socket is
configured to receive a unitary memory module.
3. The apparatus of claim 2, comprising the processor component and
the unitary memory module.
4. The apparatus of claim 3, the unitary memory module comprising a
quantity of memory based in part on a number of cores of the
processor component.
5. The apparatus of claim 3, comprising: a processor component heat
sink mechanically coupled to the substrate and thermally coupled to
the processor component; and a unitary memory module heat sink
mechanically coupled to the substrate and thermally coupled to the
unitary memory module.
6. The apparatus of claim 5, comprising the unitary memory module
heat sink removably mechanically coupled to the substrate.
7. The apparatus of claim 6, comprising a hinge coupled to the
substrate and a frame coupled to the hinge, the frame and hinge to
removably mechanically couple the unitary memory module to the
substrate.
8. The apparatus of claim 7, the frame and hinge to removably
mechanically couple the unitary memory module and the unitary
memory module heat sink to the substrate.
9. The apparatus of claim 1, comprising: a second socket to receive
a processor component, the second socket disposed on the first
surface of the substrate; and a second memory socket to receive a
unitary memory module, the second memory socket disposed on the
second surface of the substrate, the second memory socket to couple
the unitary memory module of the second memory socket to the
processor component of the second socket.
10. The apparatus of claim 1, the first memory socket comprising a
ball grid array (BGA) socket.
11. The apparatus of claim 1, the first surface and the second
surface opposite from each other.
12. The apparatus of claim 2, the processor component comprising
between 2 and 32 cores.
13. The apparatus of claim 12, the quantity of memory comprising
between 1 and 4 gigabytes of memory per core.
14. The apparatus of claim 1, the unitary memory module comprising
dynamic random access memory (DRAM) or three-dimensional (3D)
cross-point memory.
15. A system for a data center comprising: a rack comprising a
plurality of sled spaces; and at least one sled coupled to the rack
via one of the plurality of sled spaces, the sled comprising: a
substrate; a first socket to receive a processor component, the
first socket disposed on a first surface of the substrate; and a
first memory socket to receive a memory module, the first memory
socket disposed on a second surface of the substrate different than
the first surface of the substrate, the first memory socket to
couple the memory module to the processor component.
16. The system of claim 15, wherein the first memory socket is
configured to receive a unitary memory module.
17. The system of claim 16, the sled comprising the processor
component and the unitary memory module.
18. The system of claim 17, the unitary memory module comprising a
quantity of memory based in part on a number of cores of the
processor component.
19. The system of claim 18, the sled comprising: a processor
component heat sink mechanically coupled to the substrate and
thermally coupled to the processor component; and a unitary memory
module heat sink mechanically coupled to the substrate and
thermally coupled to the unitary memory module.
20. The system of claim 19, the sled comprising the unitary memory
module heat sink removably mechanically coupled to the
substrate.
21. The system of claim 20, the sled comprising a hinge coupled to
the substrate and a frame coupled to the hinge, the frame and hinge
to removably mechanically couple the unitary memory module to the
substrate.
22. The system of claim 21, the frame and hinge to removably
mechanically couple the unitary memory module and the unitary
memory module heat sink to the substrate.
23. An apparatus for a physical resource sled in a data center,
comprising: a substrate mountable within a sled space of a rack of
a data center; a plurality of sockets coupled to the substrate,
each of the plurality of sockets to receive a processor component;
and a unitary memory module for each of the plurality of sockets,
the unitary memory module communicatively coupled to a respective
socket to couple the unitary memory module to a processor component
received by the socket, each of the unitary memory modules
comprising: a quantity of memory based in part on a number of cores
of the processor component; and a memory controller to couple the
quantity of memory to the processor component.
24. The apparatus of claim 23, comprising a plurality of unitary
memory module heat sinks mechanically coupled to the substrate,
each of the plurality of unitary memory module heat sinks thermally
coupled to a respective one of the plurality of unitary memory
modules.
25. The apparatus of claim 23, the plurality of sockets disposed on
a first surface of the substrate and the plurality of unitary
memory modules disposed on a second surface of the substrate, the
first surface opposite from the second surface.
Description
RELATED APPLICATIONS
[0001] This application claims priority to: U.S. Provisional Patent
Application entitled "Framework and Techniques for Pools of
Configurable Computing Resources" filed on Nov. 29, 2016 and
assigned Ser. No. 62/427,268; U.S. Provisional Patent Application
entitled "Scalable System Framework Prime (SSFP) Omnibus
Provisional II" filed on Aug. 18, 2016 and assigned Ser. No.
62/376,859; and U.S. Provisional Patent Application entitled
"Framework and Techniques for Pools of Configurable Computing
Resources" filed on Jul. 22, 2016 and assigned Ser. No. 62/365,969,
all of which are hereby incorporated by reference in their
entirety.
TECHNICAL FIELD
[0002] Examples described herein are generally related to data
centers and particularly to compute sleds comprising physical
compute resources in a data center.
BACKGROUND
[0003] Advancements in networking have enabled the rise in pools of
configurable computing resources. A pool of configurable computing
resources may be formed from a physical infrastructure including
disaggregate physical resources, for example, as found in large
data centers. The physical infrastructure can include a number of
resources having processors, memory, storage, networking, power,
cooling, etc. Management entities of these data centers can
aggregate a selection of the resources to form servers and/or
computing hosts. These hosts can subsequently be allocated to
execute and/or host system software (e.g., operating systems, virtual
machines, containers, applications, or the like). The physical resources include
processors, which are often housed along with memory on a single
sled. The present disclosure is directed to such sleds comprising
processors and memory.
BRIEF DESCRIPTION OF THE DRAWINGS
[0004] FIG. 1 illustrates a first example data center.
[0005] FIG. 2 illustrates a first example rack of a data
center.
[0006] FIG. 3 illustrates a second example rack.
[0007] FIG. 4 illustrates a third example rack.
[0008] FIG. 5 illustrates a first example sled.
[0009] FIG. 6 illustrates a second example sled.
[0010] FIG. 7 illustrates a third example sled.
[0011] FIGS. 8A-8C illustrate portions of a fourth example
sled.
[0012] FIG. 9 illustrates a fifth example sled.
[0013] FIG. 10 illustrates a second example data center.
DETAILED DESCRIPTION
[0014] Data centers may generally be composed of a large number of
racks that can contain numerous types of hardware or configurable
resources (e.g., processing units, memory, storage, accelerators,
networking, fans/cooling modules, power units, etc.). The types of
hardware or configurable resources deployed in data centers may
also be referred to as physical resources or disaggregate elements.
It is to be appreciated that the size and number of physical
resources within a data center can be large, for example, on the
order of hundreds of thousands of physical resources. Furthermore,
these physical resources can be pooled to form virtual computing
platforms for a large number and variety of computing tasks.
[0015] These physical resources are often arranged in racks within
a data center. The present disclosure provides racks arranged to
receive a number of sleds, where each sled can house a number of
physical resources. Some of the sleds in a data center can house
processor components, such as, central processing units (CPUs), or
the like. Such processing components are typically paired with
memory resources. For example, a CPU can be paired with memory to
facilitate operations (e.g., executing instructions, performing
processing operations, or the like). It is noted that an ideal
amount or quantity of memory to pair with a processing component on
a sled can depend upon the data center implementation as well as
characteristics of the processing components, for example, the
number of processing cores. The present disclosure provides memory
modules having a particular amount of memory to pair with the
processor components of a sled.
[0016] Reference is now made to the drawings, wherein like
reference numerals are used to refer to like elements throughout.
In the following description, for purposes of explanation, numerous
specific details are set forth in order to provide a thorough
understanding thereof. It may be evident, however, that the novel
embodiments can be practiced without these specific details. In
other instances, known structures and devices are shown in block
diagram form in order to facilitate a description thereof. The
intention is to provide a thorough description such that all
modifications, equivalents, and alternatives within the scope of
the claims are sufficiently described.
[0017] Additionally, reference may be made to variables, such as,
"a", "b", "c", which are used to denote components where more than
one component may be implemented. It is important to note that
there need not necessarily be multiple components and further,
where multiple components are implemented, they need not be
identical. Instead, use of variables to reference components in the
figures is done for convenience and clarity of presentation.
[0018] FIG. 1 illustrates a conceptual overview of a data center
100 that may generally be representative of a data center or other
type of computing network in/for which one or more techniques
described herein may be implemented according to various
embodiments. As shown in this figure, data center 100 may generally
contain a plurality of racks, each of which may house computing
equipment comprising a respective set of physical resources. In the
particular non-limiting example depicted in this figure, data
center 100 contains two racks 102A to 102B. Each of these two racks
102A to 102B may generally house a number of sleds. As shown in
this figure, each of racks 102A to 102B contains four sleds 104A-1
to 104A-4 and 104B-1 to 104B-4, respectively. The depicted sleds
and racks house computing equipment comprising respective sets of
physical resources 105A/B. In particular, physical resources 105A-1
to 105A-4 and 105B-1 to 105B-4 are depicted. A collective set of
physical resources 106 of data center 100 includes the various sets
of physical resources 105 (e.g., 105A-1 to 105A-4 and 105B-1 to
105B-4) that are distributed among racks 102A to 102B.
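As an illustrative aid only (the class and identifier names below are assumptions for illustration, not part of the disclosure), the rack/sled hierarchy described above can be sketched in Python:

    # Minimal sketch of data center 100: racks contain sleds, and each
    # sled houses a set of physical resources of a given type.
    from dataclasses import dataclass, field

    @dataclass
    class Sled:
        sled_id: str
        resource_type: str  # e.g., "compute", "memory", "storage", "accelerator"

    @dataclass
    class Rack:
        rack_id: str
        sleds: list = field(default_factory=list)

    # Rack 102A mixes resource types; rack 102B houses only compute sleds.
    rack_a = Rack("102A", [Sled("104A-1", "storage"), Sled("104A-2", "accelerator"),
                           Sled("104A-3", "memory"), Sled("104A-4", "compute")])
    rack_b = Rack("102B", [Sled(f"104B-{i}", "compute") for i in range(1, 5)])

    # The collective set of physical resources 106 spans both racks.
    all_sleds = rack_a.sleds + rack_b.sleds
    print(len(all_sleds))  # 8 sleds in this non-limiting example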
[0019] Physical resources 106 may include resources of multiple
types, such as--for example--processors, co-processors,
accelerators, field-programmable gate arrays (FPGAs), memory, and
storage. The embodiments are not limited to these examples. In this
particular non-limiting example, physical resources 105A may thus
be made up of the respective sets of physical resources housed in
rack 102A, which includes physical storage resources 105A-1,
physical accelerator resources 105A-2, physical memory resources
105A-3, and physical compute resources 105A-4 comprised in the
sleds 104A-1 to 104A-4 of rack 102A. In some implementations, a
rack may include a number of like physical resources. For example,
rack 102B is depicted including physical compute resources housed
in each of sleds 104B-1 to 104B-4 of rack 102B. More specifically,
sleds 104B-1 to 104B-4 respectively house physical compute
resources 105B-1, physical compute resources 105B-2, physical
compute resources 105B-3, and physical compute resources
105B-4.
[0020] It is noted that embodiments are not limited to this
example. Furthermore, each sled may contain a pool of each of the
various types of physical resources (e.g., compute, memory,
accelerator, storage). By having robotically accessible and
robotically manipulatable sleds comprising disaggregated resources,
each type of resource can be upgraded independently of each other
and at their own optimized refresh rate.
[0021] The illustrative data center 100 differs from typical data
centers in many ways. For example, in the illustrative embodiment,
the circuit boards ("sleds") on which components such as CPUs,
memory, and other components are placed are designed for increased
thermal performance. In particular, in the illustrative embodiment,
the sleds are shallower than typical boards. In other words, the
sleds are shorter from the front to the back, where cooling fans
are located. This decreases the length of the path that air must
travel across the components on the board. Further, the components
on the sled are spaced further apart than in typical circuit
boards, and the components are arranged to reduce or eliminate
shadowing (i.e., one component in the air flow path of another
component). In the illustrative embodiment, processing components
such as the processors are located on a top side of a sled while
memory (e.g., unitary memory modules depicted herein (refer to
FIGS. 6-7 and FIGS. 8A-8C)) is located on a bottom side of the
sled. As a result of the enhanced airflow provided by this design,
at least some components may operate at higher frequencies and
power levels than in typical systems, thereby increasing
performance. Furthermore, the sleds are configured to blindly mate
with power and data communication cables in each rack 102A to 102B,
enhancing their ability to be quickly removed, upgraded,
reinstalled, and/or replaced. Similarly, individual components
located on the sleds, such as processors, accelerators, memory, and
data storage drives, are configured to be easily upgraded due to
their increased spacing from each other. In the illustrative
embodiment, the components additionally include hardware
attestation features to prove their authenticity.
[0022] Furthermore, in the illustrative embodiment, the data center
100 utilizes a single network architecture ("fabric") that supports
multiple other network architectures including Ethernet and
Omni-Path. The sleds, in the illustrative embodiment, are coupled
to switches via optical fibers, which provide higher bandwidth and
lower latency than typical twisted pair cabling (e.g., Category 5,
Category 5e, Category 6, etc.). Due to the high bandwidth, low
latency interconnections and network architecture, the data center
100 may, in use, pool resources, such as memory, accelerators
(e.g., graphics accelerators, FPGAs, ASICs, etc.), and data storage
drives that are physically disaggregated, and provide them to
compute resources (e.g., processors) on an as needed basis,
enabling the compute resources to access the pooled resources as if
they were local.
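Purely for illustration (the pool abstraction and its method names are invented here, not taken from the disclosure), the as-needed pooling behavior described above can be sketched as follows:

    # Toy allocator: disaggregated resources are handed to compute sleds
    # on demand and returned to the shared pool afterwards.
    class ResourcePool:
        def __init__(self, resources):
            self.free = list(resources)  # e.g., pooled accelerators or memory units
            self.allocated = {}

        def acquire(self, sled_id):
            if not self.free:
                raise RuntimeError("pool exhausted")
            unit = self.free.pop()  # hand a pooled unit to the requesting sled
            self.allocated.setdefault(sled_id, []).append(unit)
            return unit

        def release(self, sled_id, unit):
            self.allocated[sled_id].remove(unit)
            self.free.append(unit)  # unit becomes available to other sleds again

    pool = ResourcePool(["fpga-0", "fpga-1"])
    unit = pool.acquire("104A-1")   # a compute sled borrows an accelerator
    pool.release("104A-1", unit)    # and returns it when done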
[0023] More specifically, data center 100 may feature optical
fabric 112. Optical fabric 112 may generally comprise a combination
of optical signaling media (such as optical cabling) and optical
switching infrastructure via which any particular sled in data
center 100 can send signals to (and receive signals from) each of
the other sleds in data center 100. The signaling connectivity that
optical fabric 112 provides to any given sled may include
connectivity both to other sleds in a same rack and sleds in other
racks. In the particular non-limiting example depicted in this
figure, data center 100 comprises two racks (e.g., rack 102A to
102B) each including four sleds (e.g., 104A-1 to 104A-4 and 104B-1
to 104B-4, respectively). Thus, in this example, data center 100
comprises a total of eight sleds. Via optical fabric 112, each such
sled may possess signaling connectivity with each of the seven
other sleds in data center 100. For example, via optical fabric
112, sled 104A-1 in rack 102A may possess signaling connectivity
with sled 104A-2, 104A-3 and 104A-4 in rack 102A, as well as the
four other sleds 104B-1, 104B-2, 104B-3, and 104B-4 that are
distributed among the other rack 102B of data center 100. The
embodiments are not limited to this example.
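For illustration (sled identifiers taken from the example above), the full-mesh connectivity count can be verified with a short calculation:

    # Each of the eight sleds can reach the seven others over optical
    # fabric 112; the number of distinct sled pairs is n*(n-1)/2.
    from itertools import combinations

    sleds = [f"104A-{i}" for i in range(1, 5)] + [f"104B-{i}" for i in range(1, 5)]
    peers = {s: [t for t in sleds if t != s] for s in sleds}

    assert all(len(p) == 7 for p in peers.values())
    print(len(list(combinations(sleds, 2))))  # 28 distinct sled-to-sled pairs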
[0024] In various embodiments, dual-mode optical switches (refer to
FIG. 5 and FIG. 9) may be capable of receiving both Ethernet
protocol communications carrying Internet Protocol (IP) packets and
communications according to a second, high-performance computing
(HPC) link-layer protocol (e.g., Intel's Omni-Path Architecture,
Infiniband) via optical signaling media of optical fabric 112.
Thus, as depicted, with respect to any particular pair of sleds in
data center 100, signaling connectivity via the optical fabric may
provide support for link-layer connectivity via both Ethernet links
and HPC links. Thus, both Ethernet and HPC communications can be
supported by a single high-bandwidth, low-latency switch fabric.
The embodiments are not limited to this example. However, it is
worthy to note that the dual-mode optical switches provide for
separate fault domains within a single sled. As such, information
can be written across fault domains at the sled level, as opposed
to the rack level, to mitigate data loss, corruption, or failure
at the sled level.
[0025] The racks 102A and 102B of the data center 100 may include
physical design features that facilitate the automation of a
variety of types of maintenance tasks. For example, data center 100
may be implemented using racks that are designed to be
robotically-accessed, and to accept and house
robotically-manipulatable resource sleds. Furthermore, in the
illustrative embodiment, the racks 102A and 102B include integrated
power sources that receive a greater voltage than is typical for
power sources. In particular examples, each of the sleds can
include an associated power supply. The increased voltage enables
the power sources to provide additional power to the components on
each sled, enabling the components to operate at higher than
typical frequencies.
[0026] As noted, the present disclosure provides sleds housing
physical compute resources, such as, processor components and
memory. Furthermore, the present disclosure provides a unitary
memory module having a quantity or amount of memory capacity
suitable to the data center in which the sled is implemented.
Examples of such sleds are provided with respect to FIGS. 5-7,
FIGS. 8A-8C and FIG. 9. However, a number of example racks arranged
to house such sleds are described first, with respect to FIGS. 2-4. It
is noted that the term "unitary" and particularly, "unitary
module" or "unitary memory module" is not meant to be limiting but
is instead relied on to reference memory module packages as
described and depicted herein.
[0027] FIG. 2 illustrates a general overview of a rack architecture
200 that may be representative of an architecture of any particular
one of the racks depicted in FIG. 1, according to some embodiments.
As reflected in this figure, rack architecture 200 may generally
feature a plurality of sled spaces into which sleds may be
inserted, each of which may be robotically-accessible via a rack
access region 201. In the particular non-limiting example depicted
in this figure, rack architecture 200 features five sled spaces
203-1 to 203-5. Sled spaces 203-1 to 203-5 feature respective
multi-purpose connector modules (MPCMs) 216-1 to 216-5. These MPCMs
may be arranged to receive a corresponding MPCM of a sled (e.g.,
refer to FIG. 5) to mechanically, optically, and/or electrically
couple the sleds to rack architecture 200, and particularly to an
optical fabric of a data center and to associated power sources for
each sled space 203-1 to 203-5.
[0028] FIG. 3 illustrates an example of a rack architecture 300
that may be representative of a rack architecture that may be
implemented in order to provide support for sleds featuring
expansion capabilities (e.g., refer to FIG. 9). In the particular
non-limiting example depicted in this figure, rack architecture 300
includes seven sled spaces 303-1 to 303-7, which feature respective
MPCMs 316-1 to 316-7. Sled spaces 303-1 to 303-7 include respective
primary regions 303-1A to 303-7A and respective expansion regions
303-1B to 303-7B. With respect to each such sled space, when the
corresponding MPCM is coupled with a counterpart MPCM of an
inserted sled, the primary region may generally constitute a region
of the sled space that physically accommodates the inserted sled.
The expansion region may generally constitute a region of the sled
space that can physically accommodate an expansion module (e.g.,
housing additional and/or supplemental physical resources to couple
with physical resources of the main sled), in the event that the
inserted sled is configured with such a module.
[0029] FIG. 4 illustrates an example of a rack 402 that may be
representative of a rack implemented according to rack architecture
300 of FIG. 3 according to some embodiments. In the particular
non-limiting example depicted in FIG. 4, rack 402 features seven
sled spaces 403-1 to 403-7, which include respective primary
regions 403-1A to 403-7A and respective expansion regions 403-1B to
403-7B. In various embodiments, temperature control in rack 402 may
be implemented using an air cooling system. For example, as
reflected in this figure, rack 402 may feature a plurality of fans
419 that are generally arranged to provide air cooling within the
various sled spaces 403-1 to 403-7. In some embodiments, the height
of the sled space is greater than the conventional "1U" server
height. In such embodiments, fans 419 may generally comprise
relatively slow, large diameter cooling fans as compared to fans
used in conventional rack configurations. Running larger diameter
cooling fans at lower speeds may increase fan lifetime relative to
smaller diameter cooling fans running at higher speeds while still
providing the same amount of cooling. The sleds are physically
shallower than conventional rack dimensions. Further, components
are arranged on each sled to reduce thermal shadowing (i.e., not
arranged serially in the direction of air flow). As a result, the
wider, shallower sleds allow for an increase in device performance
because the devices can be operated at a higher thermal envelope
(e.g., 250 W) due to improved cooling (i.e., no thermal shadowing,
more space between devices, more room for larger heat sinks,
etc.).
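As a hedged illustration (the disclosure itself does not state this relation), the standard fan affinity law, in which volumetric airflow scales with the cube of the fan diameter times rotational speed, shows why a larger fan can run slower while providing the same cooling:

    # Fan affinity approximation: Q is proportional to D**3 * N, so a
    # larger fan matches a smaller fan's airflow at a lower speed.
    def matching_rpm(d_small_mm, rpm_small, d_large_mm):
        """RPM a larger fan needs to deliver the same volumetric flow."""
        return rpm_small * (d_small_mm / d_large_mm) ** 3

    # Hypothetical example: an 80 mm fan at 3000 RPM vs. a 120 mm fan.
    print(round(matching_rpm(80, 3000, 120)))  # ~889 RPM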
[0030] MPCMs 416-1 to 416-7 may be configured to provide inserted
sleds with access to power sourced by respective power modules
420-1 to 420-7, each of which may draw power from an external power
source 421. In various embodiments, external power source 421 may
deliver alternating current (AC) power to rack 402, and power
modules 420-1 to 420-7 may be configured to convert such AC power
to direct current (DC) power to be sourced to inserted sleds. In
some embodiments, for example, power modules 420-1 to 420-7 may be
configured to convert 277-volt AC power into 12-volt DC power for
provision to inserted sleds via respective MPCMs 416-1 to 416-7.
The embodiments are not limited to this example.
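A back-of-the-envelope check (the power figure is assumed from the 250 W thermal envelope discussed above, and conversion losses are ignored) of what 12-volt DC delivery implies per sled:

    # Current a power module must source at 12 V DC for one sled.
    ac_voltage = 277.0    # volts AC from external power source 421
    dc_voltage = 12.0     # volts DC provided to inserted sleds
    sled_power_w = 250.0  # assumed per-sled power draw

    dc_current_a = sled_power_w / dc_voltage
    print(f"{dc_current_a:.1f} A per sled at 12 V DC")  # ~20.8 A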
[0031] MPCMs 416-1 to 416-7 may also be arranged to provide
inserted sleds with optical signaling connectivity to an optical
fabric, which may be the same as--or similar to--optical fabric 112
of FIG. 1. In various embodiments, optical connectors contained in
MPCMs 416-1 to 416-7 may be designed to couple with counterpart
optical connectors contained in MPCMs of inserted sleds to provide
such sleds with optical signaling connectivity to optical fabric
412 via respective lengths of optical cabling 422-1 to 422-7. In
some embodiments, each such length of optical cabling may extend
from its corresponding MPCM to an optical interconnect loom 423
that is external to the sled spaces of rack 402. In various
embodiments, optical interconnect loom 423 may be arranged to pass
through a support post or other type of load-bearing element of
rack 402. The embodiments are not limited in this context. Because
inserted sleds connect to an optical switching infrastructure via
MPCMs, the resources typically spent in manually configuring the
rack cabling to accommodate a newly inserted sled can be saved.
[0032] FIG. 5 illustrates an example of a sled 504 that may be
representative of a sled designed for use in conjunction with a
rack according to some embodiments (e.g., racks according to rack
architectures 200 or 300 or rack 402). Sled 504 may feature an MPCM
516 that comprises an optical connector 516A and a power connector
516B, and that is designed to couple with a counterpart MPCM of a
sled space in conjunction with insertion of MPCM 516 into that sled
space. Coupling MPCM 516 with such a counterpart MPCM may cause
power connector 516B to couple with a power connector comprised in
the counterpart MPCM. This may generally enable physical compute
resources 505 of sled 504 to source power from an external source,
via power connector 516B and power transmission media 524 that
conductively couples power connector 516B to physical compute
resources 505.
[0033] Physical compute resources 505 can generally include any
number of processor components and associated memory. For example,
physical compute resources 505 include processor components 533-1
and 533-2 and memory 535-1 and 535-2. Processor component 533-1 is
operably coupled to memory 535-1 via electrical signaling media 528
while processor component 533-2 is operably coupled to memory 535-2
via electrical signaling media 528.
[0034] In general, processor components 533-1 and 533-2 can be any of a
variety of processors, such as, central processing units (CPUs),
graphics processing units (GPUs), field-programmable gate arrays
(FPGAs) or the like. In this illustrative example, processor
components 533-1 and 533-2 can be central processing units
comprising a number of processing cores. For example, each of
processing components 533-1 and 533-2 can have any number of cores,
even a different number of cores. As a specific example, each of
processing components 533-1 and 533-2 can have 2 cores, 4 cores, 8
cores, 12 cores, 24 cores, 32 cores, or the like. In this
illustrative example, processor components 533-1 and 533-2 are
depicted including 4 cores each. Specifically, processor component
533-1 is depicted including 4 cores 580-1 while processor component
533-2 is depicted including 4 cores 580-2. Examples are, however,
not limited in this context. Furthermore, processing components
533-1 and 533-2 can be x86 (e.g., 32-bit, 64-bit, or the like)
based processors manufactured at any of a variety of device
fabrication nodes, such as, for example, the 7 nanometer (nm) node, 10
nm node, 14 nm node, 22 nm node, 32 nm node, 45 nm node, or the like.
Furthermore, the processing components can be packaged in any of a
variety of package types having various pin counts. Examples are
not limited in this context.
[0035] As will be described in greater detail below, memory 535-1
and 535-2 can be embodied in a ball grid array (BGA) package and
referred to as a "unitary module" or a "unitary memory module."
Furthermore, each of the unitary memory modules 535-1 and 535-2 can
include a controller (e.g., memory controller, or the like) and a
memory. For example, unitary memory module 535-1 can include
controller 590-1 and memory 592-1 while unitary memory module 535-2
can include controller 590-2 and memory 592-2. In general, memory
535-1 and memory 535-2 (or more particularly, memory 592-1 and
592-2) can be any of a variety of types of memory, including
volatile memory, non-volatile memory, etc.
[0036] In some examples, sled 504 can comprise two levels of memory
(sometimes referred to as `2LM`). A first level of the 2LM
architecture can comprise smaller faster memory while a second
level of memory can comprise larger and slower memory, relative to
the first level. In some cases, the first level of memory can be
referred to as near memory while the second level of memory can be
referred to as far memory. With some examples, the unitary memory
modules 535-1 and 535-2 can be implemented as near memory for
corresponding processor components 533-1 and 533-2.
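A minimal model (invented here for illustration, not taken from the disclosure) of a 2LM-style read path, in which the smaller, faster near memory is consulted before the larger, slower far memory:

    # Two-level memory sketch: a near-memory hit returns immediately;
    # a miss falls back to far memory and may promote the value.
    class TwoLevelMemory:
        def __init__(self, near_capacity):
            self.near = {}  # e.g., unitary memory module (near memory)
            self.far = {}   # e.g., larger, slower far memory
            self.near_capacity = near_capacity

        def read(self, addr):
            if addr in self.near:       # near-memory hit: low latency
                return self.near[addr]
            value = self.far.get(addr)  # miss: consult far memory
            if value is not None and len(self.near) < self.near_capacity:
                self.near[addr] = value  # promote hot data into near memory
            return value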
[0037] For example, unitary memory modules 535-1 and 535-2 (and
particularly memory 592-1 and 592-2) can be implemented from
random-access memory (RAM), dynamic RAM (DRAM), synchronous DRAM
(SDRAM), double-data rate SDRAM, NAND memory, NOR memory,
three-dimensional (3D) cross-point memory, ferroelectric memory,
silicon-oxide-nitride-oxide-silicon (SONOS) memory, polymer memory
such as ferroelectric polymer memory, ferroelectric transistor
random access memory (FeTRAM or FeRAM), nanowire, phase-change RAM
(PRAM), resistive RAM (RRAM), magnetoresistive RAM (MRAM), spin
transfer torque MRAM (STT-MRAM) memory, non-volatile static RAM
(nvSRAM), conductive-bridging RAM (CBRAM), nano-RAM (NRAM),
floating junction gate RAM (FJG RAM), or the like.
[0038] Unitary memory modules 535-1 and 535-2 can include a
quantity of memory (e.g., memory 592-1 and 592-2, respectively)
based on the processor component 533-1 or 533-2 to which the unitary
memory modules are coupled. For example, unitary memory modules
535-1 and 535-2 can include between 2 and 4 Gigabytes (GB) of
memory for each core of processor component 533-1 or 533-2 to which
unitary memory modules 535-1 and 535-2 are attached. As a specific
example, each of processor components 533-1 and 533-2 can include
32 cores while each of unitary memory modules 535-1 and 535-2
includes 96 GB of memory, which equates to 3 GB of memory per core.
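The sizing rule above reduces to simple arithmetic; a small sketch (the function name is hypothetical) makes the 32-core example concrete:

    # Memory quantity for a unitary memory module, in gigabytes.
    def unitary_module_capacity_gb(cores, gb_per_core):
        assert 2 <= gb_per_core <= 4  # range given in the example above
        return cores * gb_per_core

    # Specific example: 32 cores at 3 GB per core yields 96 GB.
    print(unitary_module_capacity_gb(32, 3))  # 96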
[0039] The present disclosure can provide a sled having unitary
memory modules (e.g., unitary memory modules 535-1 and 535-2, or
the like) arranged and configured to be removed in an autonomous
process, such as, for example, by a robot operating in a data
center. Examples are not limited in this context.
[0040] Sled 504 may also include dual-mode optical network
interface circuitry 526. Dual-mode optical network interface
circuitry 526 may generally comprise circuitry that is capable of
communicating over optical signaling media according to each of
multiple link-layer protocols supported by an optical fabric (e.g.,
optical fabric 112 of FIG. 1, optical fabric 414 of FIG. 4, or the
like). In some embodiments, dual-mode optical network interface
circuitry 526 may be capable both of Ethernet protocol
communications and of communications according to a second,
high-performance protocol. In various embodiments, dual-mode
optical network interface circuitry 526 may include one or more
optical transceiver modules 527, each of which may be capable of
transmitting and receiving optical signals over each of one or more
optical channels. The embodiments are not limited in this
context.
[0041] Coupling MPCM 516 with a counterpart MPCM of a sled space in
a given rack may cause optical connector 516A to couple with an
optical connector comprised in the counterpart MPCM. This may
generally establish optical connectivity between optical cabling of
the sled and dual-mode optical network interface circuitry 526, via
each of a set of optical channels 525. With some examples, optical
channels 525 comprise 4 optical fiber channels. With some examples,
each of the optical channels can provide between 20 and 220
gigabytes per second (GB/s) of bandwidth. As a specific example,
each of the optical channels can provide 50 GB/s bandwidth. As
another specific example, each of the optical channels can provide
200 GB/s bandwidth. Dual-mode optical network interface circuitry
526 may communicate with the physical resources 505 of sled 504 via
electrical signaling media 528. In addition to the dimensions of
the sleds and arrangement of components on the sleds to provide
improved cooling and enable operation at a relatively higher
thermal envelope (e.g., 250 W), as described above with reference
to FIG. 4, in some embodiments, a sled may include one or more
additional features to facilitate air cooling, such as a heat pipe
and/or heat sinks arranged to dissipate heat generated by physical
resources 505. It is worthy of note that although the example sled
504 depicted in FIG. 5 does not feature an expansion connector, any
given sled that features the design elements of sled 504 may also
feature an expansion connector according to some embodiments. The
embodiments are not limited in this context.
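For illustration (the channel count and per-channel rate are taken from the examples above), the aggregate bandwidth across optical channels 525:

    # Four optical fiber channels at the cited 50 GB/s example rate.
    num_channels = 4
    per_channel_gb_s = 50.0  # each channel: between 20 and 220 GB/s

    aggregate = num_channels * per_channel_gb_s
    print(f"{aggregate:.0f} GB/s aggregate")  # 200 GB/s, matching the other cited example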
[0042] FIG. 6 depicts a perspective view of an example sled 604. It
is noted that the example sled 604 is not depicted to scale; in
particular, features of sled 604 are depicted in exaggerated form
to facilitate understanding. The example sled 604 includes a
substrate 640, processor components 633-1 and 633-2, and unitary
memory modules 635-1 and 635-2. Processor components 633-1 and
633-2 can be any of a variety of processor components, and can be
like the processor components 533-1 and 533-2 depicted and
described with respect to FIG. 5. Furthermore, sled 604 can include
processor component heat sinks (refer to FIG. 7) thermally coupled
to processor components 633-1 and 633-2 to dissipate thermal energy
generated by processor component 633-1 and 633-2 during
operation.
[0043] Unitary memory modules 635-1 and 635-2 can be memory (e.g.,
DRAM, or the like) packaged in unitary modules and coupled to
respective ones of processor components 633-1 and 633-2. In
general, unitary memory modules 635-1 and 635-2 can be packaged
into a BGA package suitable to couple to a socket disposed on
substrate 640 (refer to FIGS. 8A-8C). Sled 604 can include unitary
module heat sinks (refer to FIG. 7) thermally coupled to unitary
memory modules 635-1 and 635-2 to dissipate thermal energy
generated by unitary memory modules 635-1 and 635-2 during
operation.
[0044] In general, processor components 633-1 and 633-2 (and
associated heat sinks) can be disposed on a first side (e.g., upper
surface in this example) of substrate 640. Furthermore, unitary
memory modules 635-1 and 635-2 can be disposed on a second side
(e.g., lower surface in this example) of substrate 640. It is
noted that the first side and second side (or first surface and
second surface of substrate 640), to which processor
components 633-1 and 633-2 and unitary memory modules 635-1 and
635-2 are respectively coupled, can be opposite from each other.
Said differently, computing resources (e.g., processor components,
or the like) can be disposed on the upper surface of the sled 604
while memory for processor components (e.g., unitary memory
modules, or the like) can be disposed on the lower surface of sled
604. The sled 604 can further comprise (not shown) circuit boards
and/or connective components to provide connectivity between the
processor components and memory, as well as other interconnects of
the sled 604 (e.g., optical interconnects, or the like).
[0045] FIG. 7 depicts a perspective view of an example sled 704. It
is noted that the example sled 704 is not depicted to scale. The
example sled 704 includes a substrate 740, processor components
733-1 and 733-2 (obscured in this view), unitary memory module
735-1, unitary memory module 735-2 (obscured in this view),
processor component heat sinks 737-1 and 737-2, unitary module heat
sink 739-1 (not shown) and unitary module heat sink 739-2.
[0046] Processor components 733-1 and 733-2 can be any of a variety
of processor components, and can be like the processor components
533-1 and 533-2 depicted and described with respect to FIG. 5.
Furthermore, sled 704 can include processor component heat sinks
737-1 and 737-2 thermally coupled to respective processor
components 733-1 and 733-2 to dissipate thermal energy generated by
processor components 733-1 and 733-2 during operation. It is noted
that processor components 733-1 and 733-2 are obscured from view by
the processor component heat sinks 737-1 and 737-2.
[0047] Unitary memory modules 735-1 and 735-2 can be memory (e.g.,
DRAM, or the like) packaged in unitary modules and coupled to
respective ones of processor components 733-1 and 733-2. In
general, unitary memory modules 735-1 and 735-2 can be packaged
into a BGA package suitable to couple to a socket disposed on
substrate 740 (refer to FIGS. 8A-8C). For example, unitary module
BGA package 750-1 is depicted comprising a memory array 752-1 on
one side and BGA contacts (obscured in this view) on a side facing
the substrate 740. As described above, unitary memory modules can
include a quantity of memory based in part on the number of cores
of a processor component to which the unitary memory module is
coupled. As such, in this illustrative example, memory array 752-1
could include a quantity of memory (e.g., GBs of DRAM, or the like)
based in part on the number of cores of processor component
733-1.
[0048] Sled 704 can include unitary module heat sinks 739-1 (not
shown) and 739-2 thermally coupled to unitary memory modules 735-1
and 735-2, respectively. Unitary module heat sinks 739-1 and 739-2
can dissipate thermal energy generated by unitary memory modules
735-1 and 735-2 during operation.
[0049] In general, heat sinks (e.g., 737-1, 737-2, 739-1, 739-2, or
the like) can be mechanically coupled to substrate 740 and
thermally coupled to an active component (e.g., processor component
733-1, processor component 733-2, unitary memory module 735-1,
unitary memory module 735-2, or the like) via any of a variety of
methods, such as, for example, screws, hold-downs, springs, frames,
buttons, thermal paste, surface contact, or the like.
[0050] In general, processor components 733-1 and 733-2 and
associated heat sinks 737-1 and 737-2 can be disposed on a first
side (e.g., upper surface in this example) of substrate 740.
Furthermore, unitary memory modules 735-1 and 735-2 and associated
heat sinks 739-1 and 739-2 can be disposed on a second side (e.g.,
lower surface in this example) of substrate 740. It is noted that
the first side and second side (or first surface and second
surface of substrate 740), to which processor components 733-1 and
733-2 and unitary memory modules 735-1 and 735-2 are respectively
coupled, can be opposite from each other. Said differently,
computing resources (e.g., processor components, or the like) can
be disposed on the upper surface of the sled 704 while memory for
processor components (e.g., unitary memory modules, or the like)
can be disposed on the lower surface of sled 704. The sled 704 can
further comprise (not shown) circuit boards and/or connective
components to provide connectivity between the processor components
and memory, as well as other interconnects of the sled 704 (e.g.,
optical interconnects, or the like).
[0051] FIGS. 8A-8C depict a perspective view of an example sled 804
and removal (or installation) of a unitary memory module. It is
noted that the example sled 804 is not depicted to scale and does
not depict all elements that can be implemented on such an example
sled. In particular, this illustrative example depicts sled 804 (or
a portion of sled 804) including substrate 840 and features a
unitary memory module. Turning more specifically to FIG. 8A,
substrate 840 is depicted with unitary module heat sink 839 coupled
to substrate 840. In particular, unitary module heat sink 839 is
coupled to substrate 840 via heat sink fasteners 862. In some
examples, heat sink fasteners 862 can be arranged and/or configured
to provide automated removal and installation of heat sink 839 to
substrate 840. For example, heat sink 839 and fasteners 862 can be
arranged and configured to be removed by an autonomous apparatus,
such as a robot.
[0052] Turning more specifically to FIG. 8B, substrate 840 is
depicted with unitary module heat sink 839 removed such that
unitary module package 850 is illustrated. As depicted, unitary
module package 850 is coupled to substrate 840 via a hinge 870 and
a frame 872. In some examples, frame 872 can be arranged and/or configured
to provide automated removal and installation of unitary module
package 850 to substrate 840. For example, frame 872 can be
manipulated about hinge 870 by an autonomous apparatus, such as a
robot.
[0053] Turning more specifically to FIG. 8C, substrate 840 is
depicted with unitary module package 850 removed such that BGA
contacts 842 are illustrated. As depicted, BGA contacts 842 are
arranged on substrate 840 to provide electrical coupling between
unitary module package 850 and a corresponding processor component
of sled 804. It is noted that BGA contacts 842 are depicted in an
arrangement (e.g., row, column, etc.) and at a quantity to
facilitate understanding. However, in practice, the BGA array 842
can have any shape and number of individual contacts suitable to
operably couple unitary modules to processor components. Examples
are not limited in this context.
[0054] As noted above, with some examples, a sled can be arranged
to accept an expansion sled. FIG. 9 illustrates an example of a
sled 904 that may be representative of a sled of such a type. As
shown in this figure, sled 904 may comprise a set of physical
resources 905, as well as an MPCM 916 designed to couple with a
counterpart MPCM when sled 904 is inserted into a sled space such
as any of sled spaces 303-1 to 303-7 of FIG. 3. Sled 904 can also
feature dual-mode optical network interface circuitry 926 to couple
components of sled 904 to optical fabric of a data center.
[0055] Sled 904 may also feature an expansion connector 917.
Expansion connector 917 may generally comprise a socket, slot, or
other type of connection element that is capable of accepting one
or more types of expansion modules, such as an expansion sled 918.
By coupling with a counterpart connector on expansion sled 918,
expansion connector 917 may provide physical resources 905 with
access to supplemental physical resources 905B residing on
expansion sled 918.
[0056] For example, physical resources 905 can comprise physical
compute resources, such as, processor component(s) 933 and unitary
memory module(s) 935. Additional processor components (e.g.,
co-processors, accelerators, GPUs, or the like) or memory
(e.g., far memory, or the like) to be included in physical
resources can be provided via supplemental physical resources 905B
on expansion sled 918.
[0057] FIG. 10 illustrates an example of a data center 1000 that
may generally be representative of one in/for which one or more
techniques described herein may be implemented according to various
embodiments. As reflected in this figure, a physical infrastructure
management framework 1050A may be implemented to facilitate
management of a physical infrastructure 1000A of data center 1000.
In various embodiments, one function of physical infrastructure
management framework 1050A may be to manage automated maintenance
functions within data center 1000, such as the use of robotic
maintenance equipment to service computing equipment within
physical infrastructure 1000A. In some embodiments, physical
infrastructure 1000A may feature an advanced telemetry system that
performs telemetry reporting that is sufficiently robust to support
remote automated management of physical infrastructure 1000A. In
various embodiments, telemetry information provided by such an
advanced telemetry system may support features such as failure
prediction/prevention capabilities and capacity planning
capabilities. In some embodiments, physical infrastructure
management framework 1050A may also be configured to manage
authentication of physical infrastructure components using hardware
attestation techniques. For example, robots may verify the
authenticity of components before installation by analyzing
information collected from a radio frequency identification (RFID)
tag associated with each component to be installed. The embodiments
are not limited in this context.
[0058] As shown in this figure, the physical infrastructure 1000A
of data center 1000 may comprise an optical fabric 1012, which may
include a dual-mode optical switching infrastructure 1014. Optical
fabric 1012 and dual-mode optical switching infrastructure 1014 may
be the same as--or similar to--optical fabric 112 of FIG. 1 or 412
of FIG. 4, and may provide high-bandwidth, low-latency,
multi-protocol connectivity among sleds of data center 1000. As
discussed above, with reference to FIG. 1, in various embodiments,
the availability of such connectivity may make it feasible to
disaggregate and dynamically pool resources such as accelerators,
memory, and storage. In some embodiments, for example, one or more
pooled accelerator sleds 1030 may be included among the physical
infrastructure 1000A of data center 1000, each of which may
comprise a pool of accelerator resources--such as co-processors
and/or FPGAs, for example--that is globally accessible to
other sleds via optical fabric 1012 and dual-mode optical switching
infrastructure 1014.
[0059] In another example, in various embodiments, one or more
pooled storage sleds 1032 may be included among the physical
infrastructure 1000A of data center 1000, each of which may
comprise a pool of storage resources that is globally
accessible to other sleds via optical fabric 1012 and dual-mode
optical switching infrastructure 1014. In some embodiments, such
pooled storage sleds 1032 may comprise pools of solid-state storage
devices such as solid-state drives (SSDs). In various embodiments,
one or more high-performance processing sleds 1034 may be included
among the physical infrastructure 1000A of data center 1000. In
some embodiments, high-performance processing sleds 1034 may
comprise pools of high-performance processors, as well as cooling
features that enhance air cooling to yield a higher thermal
envelope of up to 250 W or more. In various embodiments, any given
high-performance processing sled 1034 may feature an expansion
connector 1017 that can accept a far memory expansion sled, such
that the far memory that is locally available to that
high-performance processing sled 1034 is disaggregated from the
processors and memory comprised on that sled. In some embodiments,
such a high-performance processing sled 1034 may be configured with
far memory using an expansion sled that comprises low-latency SSD
storage. The optical infrastructure allows for compute resources on
one sled to utilize remote accelerator/FPGA, memory, and/or SSD
resources that are disaggregated on a sled located on the same rack
or any other rack in the data center. The remote resources can be
located one switch jump or two switch jumps away in the
spine-leaf network architecture. The embodiments are not limited in
this context.
[0060] In various embodiments, one or more layers of abstraction
may be applied to the physical resources of physical infrastructure
1000A in order to define a virtual infrastructure, such as a
software-defined infrastructure 1000B. In some embodiments, virtual
computing resources 1036 of software-defined infrastructure 1000B
may be allocated to support the provision of cloud services 1040.
In various embodiments, particular sets of virtual computing
resources 1036 may be grouped for provision to cloud services 1040
in the form of SDI services 1038. Examples of cloud services 1040
may include--without limitation--software as a service (SaaS)
services 1042, platform as a service (PaaS) services 1044, and
infrastructure as a service (IaaS) services 1046.
[0061] In some embodiments, management of software-defined
infrastructure 1000B may be conducted using a virtual
infrastructure management framework 1050B. In various embodiments,
virtual infrastructure management framework 1050B may be designed
to implement workload fingerprinting techniques and/or
machine-learning techniques in conjunction with managing allocation
of virtual computing resources 1036 and/or SDI services 1038 to
cloud services 1040. In some embodiments, virtual infrastructure
management framework 1050B may use/consult telemetry data in
conjunction with performing such resource allocation. In various
embodiments, an application/service management framework 1050C may
be implemented in order to provide QoS management capabilities for
cloud services 1040. The embodiments are not limited in this
context.
[0062] One or more aspects of at least one example may be
implemented by representative instructions stored on at least one
machine-readable medium which represents various logic within the
processor, which when read by a machine, computing device or system
causes the machine, computing device or system to fabricate logic
to perform the techniques described herein. Such representations,
known as "IP cores" may be stored on a tangible, machine readable
medium and supplied to various customers or manufacturing
facilities to load into the fabrication machines that actually make
the logic or processor.
[0063] Various examples may be implemented using hardware elements,
software elements, or a combination of both. In some examples,
hardware elements may include devices, components, processors,
microprocessors, circuits, circuit elements (e.g., transistors,
resistors, capacitors, inductors, and so forth), integrated
circuits, application specific integrated circuits (ASICs),
programmable logic devices (PLDs), digital signal processors
(DSPs), field programmable gate array (FPGA), memory units, logic
gates, registers, semiconductor devices, chips, microchips, chip
sets, and so forth. In some examples, software elements may include
software components, programs, applications, computer programs,
application programs, system programs, machine programs, operating
system software, middleware, firmware, software modules, routines,
subroutines, functions, methods, procedures, software interfaces,
application program interfaces (APIs), instruction sets, computing
code, computer code, code segments, computer code segments, words,
values, symbols, or any combination thereof. Determining whether an
example is implemented using hardware elements and/or software
elements may vary in accordance with any number of factors, such as
desired computational rate, power levels, heat tolerances,
processing cycle budget, input data rates, output data rates,
memory resources, data bus speeds and other design or performance
constraints, as desired for a given implementation.
[0064] Some examples may include an article of manufacture or at
least one computer-readable medium. A computer-readable medium may
include a non-transitory storage medium to store logic. In some
examples, the non-transitory storage medium may include one or more
types of computer-readable storage media capable of storing
electronic data, including volatile memory or non-volatile memory,
removable or non-removable memory, erasable or non-erasable memory,
writeable or re-writeable memory, and so forth. In some examples,
the logic may include various software elements, such as software
components, programs, applications, computer programs, application
programs, system programs, machine programs, operating system
software, middleware, firmware, software modules, routines,
subroutines, functions, methods, procedures, software interfaces,
APIs, instruction sets, computing code, computer code, code
segments, computer code segments, words, values, symbols, or any
combination thereof.
[0065] According to some examples, a computer-readable medium may
include a non-transitory storage medium to store or maintain
instructions that when executed by a machine, computing device or
system, cause the machine, computing device or system to perform
methods and/or operations in accordance with the described
examples. The instructions may include any suitable type of code,
such as source code, compiled code, interpreted code, executable
code, static code, dynamic code, and the like. The instructions may
be implemented according to a predefined computer language, manner
or syntax, for instructing a machine, computing device or system to
perform a certain function. The instructions may be implemented
using any suitable high-level, low-level, object-oriented, visual,
compiled and/or interpreted programming language.
[0066] Some examples may be described using the expressions "in one
example" or "in an example," along with their derivatives. These terms
mean that a particular feature, structure, or characteristic
described in connection with the example is included in at least
one example. The appearances of the phrase "in one example" in
various places in the specification are not necessarily all
referring to the same example.
[0067] Some examples may be described using the expression
"coupled" and "connected" along with their derivatives. These terms
are not necessarily intended as synonyms for each other. For
example, descriptions using the terms "connected" and/or "coupled"
may indicate that two or more elements are in direct physical or
electrical contact with each other. The term "coupled," however,
may also mean that two or more elements are not in direct contact
with each other, but yet still co-operate or interact with each
other.
[0068] It is emphasized that the Abstract of the Disclosure is
provided to comply with 37 C.F.R. Section 1.72(b), requiring an
abstract that will allow the reader to quickly ascertain the nature
of the technical disclosure. It is submitted with the understanding
that it will not be used to interpret or limit the scope or meaning
of the claims. In addition, in the foregoing Detailed Description,
it can be seen that various features are grouped together in a
single example for the purpose of streamlining the disclosure. This
method of disclosure is not to be interpreted as reflecting an
intention that the claimed examples require more features than are
expressly recited in each claim. Rather, as the following claims
reflect, inventive subject matter lies in less than all features of
a single disclosed example. Thus, the following claims are hereby
incorporated into the Detailed Description, with each claim
standing on its own as a separate example. In the appended claims,
the terms "including" and "in which" are used as the plain-English
equivalents of the terms "comprising" and "wherein,"
respectively. Moreover, the terms "first," "second," "third," and
so forth, are used merely as labels, and are not intended to impose
numerical requirements on their objects.
[0069] Although the subject matter has been described in language
specific to structural features and/or methodological acts, it is
to be understood that the subject matter defined in the appended
claims is not necessarily limited to the specific features or acts
described above. Rather, the specific features and acts described
above are disclosed as example forms of implementing the
claims.
[0070] The present disclosure can be implemented in any of a
variety of embodiments, such as the following
non-exhaustive listing of example embodiments.
Example 1
[0071] An apparatus for a sled to house physical compute resources
of a data center, the apparatus comprising: a substrate; a first
socket to receive a processor component, the first socket disposed
on a first surface of the substrate; and a first memory socket to
receive a memory module, the first memory socket disposed on a
second surface of the substrate different than the first surface of
the substrate, the first memory socket to couple the memory module
to the processor component.
Example 2
[0072] The apparatus of example 1, wherein the first memory socket
is configured to receive a unitary memory module.
Example 3
[0073] The apparatus of example 2, comprising the processor
component and the unitary memory module.
Example 4
[0074] The apparatus of example 3, the unitary memory module
comprising a quantity of memory based in part on a number of cores
of the processor component.
Example 5
[0075] The apparatus of example 3, comprising a processor component
heat sink mechanically coupled to the substrate and thermally
coupled to the processor component.
Example 6
[0076] The apparatus of example 5, comprising a unitary memory
module heat sink mechanically coupled to the substrate and
thermally coupled to the unitary memory module.
Example 7
[0077] The apparatus of example 6, comprising the unitary memory
module heat sink removably mechanically coupled to the
substrate.
Example 8
[0078] The apparatus of example 7, comprising a hinge coupled to
the substrate and a frame coupled to the hinge, the frame and hinge
to removably mechanically couple the unitary memory module to the
substrate.
Example 9
[0079] The apparatus of example 8, the frame and hinge to removably
mechanically couple the unitary memory module and the unitary
memory module heat sink to the substrate.
Example 10
[0080] The apparatus of example 2, comprising: a second socket to
receive a processor component, the second socket disposed on the
first surface of the substrate; and a second memory socket to
receive a unitary memory module, the second memory socket disposed
on the second surface of the substrate, the second memory socket to
couple the unitary memory module of the second memory socket to the
processor component of the second socket.
Example 11
[0081] The apparatus of any one of examples 1 to 10, the first
memory socket comprising a ball grid array (BGA) socket.
Example 12
[0082] The apparatus of any one of examples 1 to 10, the first
surface and the second surface opposite from each other.
Example 13
[0083] The apparatus of example 3, the processor component
comprising between 2 and 32 cores.
Example 14
[0084] The apparatus of example 13, the quantity of memory
comprising between 1 and 4 gigabytes of memory per core.
Example 15
[0085] The apparatus of any one of examples 2 to 10, the unitary
memory module comprising dynamic random access memory (DRAM) or
three-dimensional (3D) cross-point memory.
Example 16
[0086] A system for a data center comprising: a rack comprising a
plurality of sled spaces; and at least one sled coupled to the rack
via one of the plurality of sled spaces, the sled comprising: a
substrate; a first socket to receive a processor component, the
first socket disposed on a first surface of the substrate; and a
first memory socket to receive a memory module, the first memory
socket disposed on a second surface of the substrate different than
the first surface of the substrate, the first memory socket to
couple the memory module to the processor component.
Example 17
[0087] The system of example 16, wherein the first memory socket is
configured to receive a unitary memory module.
Example 18
[0088] The system of example 17, the sled comprising the processor
component and the unitary memory module.
Example 19
[0089] The system of example 18, the unitary memory module
comprising a quantity of memory based in part on a number of cores
of the processor component.
Example 20
[0090] The system of example 18, the sled comprising a processor
component heat sink mechanically coupled to the substrate and
thermally coupled to the processor component.
Example 21
[0091] The system of example 20, the sled comprising a unitary
memory module heat sink mechanically coupled to the substrate and
thermally coupled to the unitary memory module.
Example 22
[0092] The system of example 21, the sled comprising the unitary
memory module heat sink removably mechanically coupled to the
substrate.
Example 23
[0093] The system of example 22, the sled comprising a hinge
coupled to the substrate and a frame coupled to the hinge, the
frame and hinge to removably mechanically couple the unitary memory
module to the substrate.
Example 24
[0094] The system of example 23, the frame and hinge to removably
mechanically couple the unitary memory module and the unitary
memory module heat sink to the substrate.
Example 25
[0095] The system of example 17, the sled comprising: a second
socket to receive a processor component, the second socket disposed
on the first surface of the substrate; and a second memory socket
to receive a unitary memory module, the second memory socket
disposed on the second surface of the substrate, the second memory
socket to couple the unitary memory module of the second memory
socket to the processor component of the second socket.
Example 26
[0096] The system of any one of examples 17 to 25, the first memory
socket comprising a ball grid array (BGA) socket.
Example 27
[0097] The system of any one of examples 17 to 25, the first
surface and the second surface opposite from each other.
Example 28
[0098] The system of example 18, the processor component comprising
between 2 and 32 cores.
Example 29
[0099] The system of example 28, the quantity of memory comprising
between 1 and 4 gigabytes of memory per core.
Example 30
[0100] The system of any one of examples 17 to 25, the memory
module comprising dynamic random access memory (DRAM) or
three-dimensional (3D) cross-point memory.
Example 31
[0101] An apparatus for a physical resource sled in a data center,
comprising: a substrate mountable within a sled space of a rack of
a data center; a plurality of sockets coupled to the substrate,
each of the plurality of sockets to receive a processor component;
and a memory module for each of the plurality of sockets, the
memory module communicatively coupled to a respective socket to
couple the memory module to a processor component received by the
socket, each of the memory modules comprising: a quantity of memory
based in part on a number of cores of the processor component; and
a memory controller to couple the quantity of memory to the
processor component.
Example 32
[0102] The apparatus of example 31, wherein each of the plurality of
memory modules comprises a unitary memory module.
Example 33
[0103] The apparatus of example 32, comprising the plurality of
processor components.
Example 34
[0104] The apparatus of example 33, comprising a plurality of
processor component heat sinks mechanically coupled to the
substrate, each of the plurality of processor component heat sinks
thermally coupled to a respective one of the plurality of processor
components.
Example 35
[0105] The apparatus of example 34, comprising a plurality of
unitary memory module heat sinks mechanically coupled to the
substrate, each of the plurality of unitary memory module heat
sinks thermally coupled to a respective one of the plurality of
unitary memory modules.
Example 36
[0106] The apparatus of example 35, comprising the plurality of
unitary memory module heat sinks removably mechanically coupled to
the substrate.
Example 37
[0107] The apparatus of example 36, comprising a hinge coupled to
the substrate and a frame coupled to the hinge, the frame and hinge
to removably mechanically couple the plurality of unitary memory
modules to the substrate.
Example 38
[0108] The apparatus of example 37, the frame and hinge to
removably mechanically couple the plurality of unitary memory
modules and the plurality of unitary memory module heat sinks to
the substrate.
Example 39
[0109] The apparatus of any one of examples 32 to 38, the plurality
of sockets disposed on a first surface of the substrate and the
plurality of unitary memory modules disposed on a second surface of
the substrate.
Example 40
[0110] The apparatus of example 39, the first surface opposite from
the second surface.
Example 41
[0111] The apparatus of any one of examples 32 to 38, each of the
plurality of processor components comprising between 2 and 32
cores.
Example 42
[0112] The apparatus of example 41, the quantity of memory
comprising between 1 and 4 gigabytes of memory per core.
Example 43
[0113] The apparatus of any one of examples 32 to 38, the memory
comprising dynamic random access memory (DRAM) or three-dimensional
(3D) cross-point memory.
Example 44
[0114] A method for a sled of a rack of a data center, the method
comprising: receiving a processor component at a first socket, the
first socket disposed on a first surface of a substrate of a sled;
receiving a memory module at a first memory socket, the first
memory socket disposed on a second surface of the substrate
different than the first surface; and coupling, via the first
socket and the first memory socket, the memory module to the
processor component.
Example 45
[0115] The method of example 44, comprising receiving a unitary
memory module at the first memory socket.
Example 46
[0116] The method of example 45, the unitary memory module
comprising a quantity of memory based in part on a number of cores
of the processor component.
Example 47
[0117] The method of example 45, the sled comprising a processor
component heat sink mechanically coupled to the substrate and
thermally coupled to the processor component.
Example 48
[0118] The method of example 47, the sled comprising a unitary
memory module heat sink mechanically coupled to the substrate and
thermally coupled to the unitary memory module.
Example 49
[0119] The method of example 48, comprising removing the unitary
memory module heat sink from the substrate.
Example 50
[0120] The method of example 48, the sled comprising a hinge
coupled to the substrate and a frame coupled to the hinge, the
frame and hinge to removably mechanically couple the unitary memory
module to the substrate.
Example 51
[0121] The method of example 50, the frame and hinge to removably
mechanically couple the unitary memory module and the unitary
memory module heat sink to the substrate.
Example 52
[0122] The method of example 50, comprising: receiving a processor
component at a second socket, the second socket disposed on the
first surface of the substrate of the sled; receiving a memory
module at a second memory socket, the second memory socket disposed
on the second surface of the substrate; and coupling, via the
second socket and the second memory socket, the memory module to
the processor component.
Example 53
[0123] The method of any one of examples 45 to 52, the first memory
socket comprising a ball grid array (BGA) socket.
Example 54
[0124] The method of any one of examples 45 to 52, the first
surface and the second surface opposite from each other.
Example 55
[0125] The method of example 44, the processor component comprising
between 2 and 32 cores.
Example 56
[0126] The method of example 55, the quantity of memory comprising
between 1 and 4 gigabytes of memory per core.
Example 57
[0127] The method of any one of examples 45 to 52, the memory
module comprising dynamic random access memory (DRAM) or
three-dimensional (3D) cross-point memory.
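Examples 13-14, 28-29, 41-42, and 55-56 above bound the processor
component at between 2 and 32 cores and the memory quantity at
between 1 and 4 gigabytes per core. The following Python sketch makes
that sizing arithmetic explicit; the function name and range checks
are illustrative assumptions rather than part of the recited
examples.

    # Illustrative sketch of the per-core memory sizing recited in the
    # examples (the examples only bound the ranges; the validation and
    # fixed ratio below are assumptions for illustration).

    CORES = range(2, 33)        # Examples 13/28/41/55: 2 to 32 cores
    GB_PER_CORE = (1, 4)        # Examples 14/29/42/56: 1 to 4 GB per core

    def module_capacity_gb(cores: int, gb_per_core: int) -> int:
        """Memory quantity of a unitary memory module serving `cores` cores."""
        if cores not in CORES:
            raise ValueError("core count outside the recited 2-32 range")
        if not GB_PER_CORE[0] <= gb_per_core <= GB_PER_CORE[1]:
            raise ValueError("ratio outside the recited 1-4 GB/core range")
        return cores * gb_per_core

    print(module_capacity_gb(32, 4))  # 128 GB: largest recited configuration
    print(module_capacity_gb(2, 1))   # 2 GB: smallest recited configuration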
* * * * *