U.S. patent application number 13/900241 was filed with the patent office on 2013-05-22 and published on 2013-11-28 for offloading of computation for rack level servers and corresponding methods and systems. The applicant listed for this patent is Xockets IP, LLC. Invention is credited to Parin Bhadrik Dalal.

Publication Number: 20130318276
Application Number: 13/900241
Family ID: 49622398
Filed: 2013-05-22
Published: 2013-11-28
United States Patent Application 20130318276
Kind Code: A1
Dalal; Parin Bhadrik
November 28, 2013
OFFLOADING OF COMPUTATION FOR RACK LEVEL SERVERS AND CORRESPONDING
METHODS AND SYSTEMS
Abstract
A system is disclosed that can include at least one processor
module connectable to a memory bus. The processor module can
include at least one memory; at least one offload processor mounted
on the processor module and configured to execute operations on
data received over the memory bus, and to output context data to
the memory and read context data from the memory; and hardware
scheduling logic mounted on the module and configured to control
operations of the at least one offload processor.
Inventors: Dalal; Parin Bhadrik (Milpitas, CA)
Applicant: Xockets IP, LLC, Wilmington, DE, US
Family ID: 49622398
Appl. No.: 13/900241
Filed: May 22, 2013
Related U.S. Patent Documents

Application Number: 61650373
Filing Date: May 22, 2012
Current U.S. Class: 710/308
Current CPC Class: G06F 16/30 20190101; Y02D 10/151 20180101; G06F 16/245 20190101; H04L 63/0227 20130101; H04L 63/0428 20130101; Y02D 10/36 20180101; G06F 21/55 20130101; H04L 63/168 20130101; Y02D 10/14 20180101; Y02D 10/00 20180101; H04L 41/12 20130101; H04L 63/0281 20130101; H04L 67/10 20130101; G06F 13/16 20130101; G06F 12/1018 20130101; H04L 49/70 20130101; G06F 9/5066 20130101; G06F 12/1081 20130101; H04L 63/14 20130101; H04L 63/0272 20130101; Y02D 10/13 20180101; Y02D 10/22 20180101; G06F 13/4282 20130101; G06F 13/1652 20130101
Class at Publication: 710/308
International Class: G06F 13/16 20060101 G06F013/16
Claims
1. A system, comprising: at least one processor module connectable
to a memory bus, the processor module including: at least one
memory, at least one offload processor mounted on the processor
module, and configured to execute operations on data received over
the memory bus, and to output context data to the memory and read
context data from the memory, and a hardware scheduling logic
mounted on the module and configured to control operations of the
at least one offload processor.
2. The system of claim 1, further including: an in-line module
connector configured to physically connect the processor module to
at least one in-line memory slot of a memory bus, with the in-line
module connector being compatible with a dual-in-line-memory module
(DIMM) connector.
3. The system of claim 1, wherein: the memory bus comprises a
double-data-rate (DDR) memory bus.
4. The system of claim 1, further including: at least one
host processor coupled to the memory bus; and the at least one
processor module further includes an interface configured to
receive write data and provide read data over the memory bus in
response to a bus controller.
5. The system of claim 4, wherein: the interface
comprises a direct memory access (DMA) slave.
6. The system of claim 4, further including: an input/output device
coupled to the at least one host processor via a host bus different
than the memory bus.
7. The system of claim 6, wherein: the input/output device is
configured to transfer network packet data over the memory bus, the
network packet data including a payload and a header portion.
8. The system of claim 1, further including: an input/output memory
management unit configured to translate logical read and write
destinations to physical read and write destinations, the physical
read and write destinations including locations of the at least one
processor module.
9. The system of claim 1, further including: at least one memory
module that includes an in-module connector configured to
physically connect the memory module to one slot of the system
memory bus, and a plurality of memory integrated circuit devices
configured to provide physical memory addresses accessible via
the system memory bus, including a ping-pong cache.
Description
PRIORITY CLAIMS
[0001] This application claims the benefit of U.S. Provisional
Patent Application 61/650,373 filed May 22, 2012, the contents of
which are incorporated by reference herein.
TECHNICAL FIELD
[0002] The present disclosure relates generally to servers, and
more particularly to offload or auxiliary processing modules that
can be physically connected to a system memory bus to process data
independent of a host processor of the server.
BACKGROUND
[0003] Networked applications often run on dedicated servers that
support an associated "state" for a context or session-defined
application. Servers can run multiple applications, each associated
with a specific state running on the server. Common server
applications include an Apache web server, a MySQL database
application, PHP hypertext preprocessing, video or audio processing
with Kaltura supported software, packet filters, application cache,
management and application switches, accounting, analytics, and
logging.
[0004] Unfortunately, servers can be limited by computational and
memory storage costs associated with switching between
applications. When multiple applications are constantly required to
be available, the overhead associated with storing the session
state of each application can result in poor performance due to
constant switching between applications. Dividing applications
between multiple processor cores can help alleviate the application
switching problem, but does not eliminate it, since even advanced
processors often only have eight to sixteen cores, while hundreds
of application or session states may be required.
SUMMARY
[0005] A system can include at least one processor module
connectable to a memory bus, the processor module including: at
least one memory, at least one offload processor mounted on the
processor module and configured to execute operations on data
received over the memory bus, and to output context data to the
memory and read context data from the memory, and hardware
scheduling logic mounted on the module and configured to control
operations of the at least one offload processor.
BRIEF DESCRIPTION OF THE DRAWINGS
[0006] FIG. 1 illustrates an embodiment with a group of web
servers that are partitioned across a group of brawny processor
core(s) and a set of wimpy cores housed in a rack server.
[0007] FIG. 2 shows an embodiment with an assembly that is
favorably suited for handling real time traffic such as video
streaming.
[0008] FIG. 3 illustrates an embodiment with a proxy
server--web server assembly that is partitioned across a group of
brawny processor core(s) (housed in a traditional server module)
and a set of wimpy cores housed in a rack server module.
[0009] FIG. 4-1 shows a cartoon schematically illustrating a data
processing system according to an embodiment, including a removable
computation module for offload of data processing.
[0010] FIG. 4-2 shows an example layout of an in-line module
(referred to as a "XIMM") according to an embodiment.
[0011] FIG. 4-3 shows two possible architectures for a data
processing system including X86 main processors and XIMMs (Xockets
MAX and MIN).
[0012] FIG. 4-4 shows a representative power budget for XIMMs
according to various embodiments.
[0013] FIG. 4-5 illustrates data flow operation of one embodiment
of a XIMM using an ARM A9 architecture.
DETAILED DESCRIPTION
[0014] Networked applications are available that run on servers and
have associated with them a state (session-defined applications).
The session nature of such applications allows them to have an
associated state and a context when the session is running on the
server. Further, if such session-limited applications are
computationally lightweight, they can be run in part or fully on
the auxiliary or additional processor cores (such as those based on
the ARM architecture, as but one particular example) which are
mounted on modules connected to a memory bus, for example, by
insertion into a socket for a Dual In-line Memory Module (DIMM).
Each such module can be referred to as a Xockets.TM. In-line Memory
Module (XIMM), and can have multiple cores (e.g., ARM cores)
associated with a memory channel. A XIMM can access the network data through
an intermediary virtual switch (such as OpenFlow or similar) that
can identify sessions and direct the network data to the
corresponding module (XIMM) mounted cores, where the session flow
for the incoming network data can be handled.
[0015] As will be appreciated, through usage of a large prefetch
buffer or low latency memory, the session context of each of the
sessions that are run on the processor cores of a XIMM can be
stored external to the cache of such processor cores. By
systematically engineering the transfer of cache context to a
memory external to the module processors (e.g., RAMs) and
engineering a low latency context switch, it is possible to execute
several high-bandwidth server applications on a XIMM provided the
applications are not computationally intensive. The "wimpy"
processor cores of a XIMM can be favorably disposed to handle high
network bandwidth traffic at a lower latency and at a very low
power when compared to traditional high power `brawny` cores.
[0016] In effect, one can reduce problems associated with session
limited servers by using the module processor (e.g., an ARM
architecture processor) of a XIMM to offload part of the
functionality of traditional servers. Module processor cores may be
suited to carry out computationally simple or lightweight applications
such as packet filtering or packet logging functions. They may also
be suited to providing the function of an application cache for
handling hot code that is to be serviced very frequently to
incoming streams. Module processor cores can also be suited for
functions such as video streaming/real-time streaming that often
only require lightweight processing.
[0017] As an example of partitioning applications between a XIMM
with "wimpy" ARM cores and a conventional "brawny" core (e.g., x86
or Itanium server processor with Intel multicore processor), a
computationally lightweight Apache web server can be hosted on one
or more XIMMs with ARM cores, while computationally heavy MySQL and
PHP are hosted on x86 brawny cores. Similarly, lightweight
applications such as a packet filter, application cache, management
and application switch are hosted on XIMM(s), while x86 cores host
control, accounting, analytics and logging.
[0018] FIG. 1 illustrates an embodiment with a group of distributed
web servers that are partitioned across a group of brawny processor
core(s) 108 connected by bus 106 to switch 104 (which may be an
OpenFlow or other virtual switch) and a set of wimpy XIMM mounted
cores (112a to 112c), all being housed in a rack server module 140.
In some embodiments, a rack server module 140 further includes a
switch (100), which can be a network interface card with single
root IO virtualization that provides input-output memory management
unit (IOMMU) functions 102. A second virtual switch (104) running,
for example, an open source software stack including OpenFlow can
redirect packets to XIMM mounted cores (112a to 112c).
[0019] According to some embodiments, a web server running
Apache-MySQL-PHP (AMP) can be used to service clients that send
requests to the server module 140 from network 120. The embodiment
of FIG. 1 can split a traditional server module running AMP across
a combination of processor cores, which act as separate processing
entities. Each of the wimpy processor cores (112a to 112c) (which
can be low power ARM cores in particular embodiments) can be
mounted on an XIMM, with each core being allocated a memory channel
(110a, 110b, 110c). At least one of the wimpy processor cores
(112a to 112c) can be capable of running computationally
lightweight Apache or similar web server code for servicing client
requests which are in the form of HTTP or a similar application
level protocol. The Apache server code can be replicated for a
plurality of clients to service a large number of requests. The
wimpy cores (112a to 112c) can be ideally suited for running such
Apache code and responding to multiple client requests at a low
latency. For static data that is available locally, wimpy cores
(112a to 112c) can look up such data in their local cache or in a low
latency memory associated with them. If the queried data is
not available locally, the wimpy cores (112a to 112c) can request a
direct memory access (DMA) (memory-to-memory or disk-to-memory)
transfer to acquire such data.
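By way of illustration only, the following C sketch shows one way the lookup-then-DMA path described above could be organized on a wimpy core; the table contents, the request_dma_transfer() stand-in, and all other names are hypothetical and are not part of the disclosed hardware.

/* Illustrative sketch: check the local low-latency store first, and only
 * request a DMA transfer when the data is not resident. */
#include <stdio.h>
#include <string.h>
#include <stddef.h>

#define LOCAL_SLOTS 4

struct cache_entry {
    const char *key;
    const char *value;   /* static content held in local memory */
};

static struct cache_entry local_store[LOCAL_SLOTS] = {
    { "/index.html", "<html>cached front page</html>" },
    { "/logo.png",   "<binary blob>" },
};

/* Stand-in for a memory-to-memory or disk-to-memory DMA request. */
static const char *request_dma_transfer(const char *key)
{
    printf("DMA transfer requested for %s\n", key);
    return "<content fetched over DMA>";
}

static const char *serve_request(const char *key)
{
    for (size_t i = 0; i < LOCAL_SLOTS; i++)
        if (local_store[i].key && strcmp(local_store[i].key, key) == 0)
            return local_store[i].value;      /* hit: answer at low latency */
    return request_dma_transfer(key);          /* miss: acquire over DMA */
}

int main(void)
{
    printf("%s\n", serve_request("/index.html"));
    printf("%s\n", serve_request("/report.csv"));
    return 0;
}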
[0020] The computation and dynamic behavior associated with the web
pages can be rendered by PHP or such other server side scripts
running on the brawny cores 108. The brawny cores might also have
code/scripting libraries for interacting with MySQL databases
stored on hard disks present in said server module 140. The wimpy
cores (112a to 112c), on receiving queries or user requests from
clients, transfer embedded PHP/MySQL queries to said brawny cores
over a connection (e.g., an Ethernet-type connection) that is
tunneled on a memory bus such as a DDR bus. The PHP interpreter on
brawny cores 108 interfaces and queries a MySQL database and
processes the queries before transferring the results to the wimpy
cores (112a to 112c) over said connection. The wimpy cores (112a to
112c) can then serve the results obtained to the end user or
client.
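As a purely illustrative sketch of the partitioning described above, the following C fragment models the wimpy core's decision to answer a static request itself or to hand a script-bearing request to the brawny cores; the brawny_execute() function stands in for the connection tunneled over the DDR bus, and all names are assumptions for illustration.

#include <stdio.h>
#include <string.h>
#include <stdbool.h>

/* Stand-in for the brawny core's PHP interpreter and MySQL query path. */
static void brawny_execute(const char *query, char *out, size_t n)
{
    snprintf(out, n, "<html>result of '%s'</html>", query);
}

static bool needs_script_engine(const char *uri)
{
    return strstr(uri, ".php") != NULL;   /* crude classification for the sketch */
}

static void wimpy_handle(const char *uri)
{
    char body[256];
    if (needs_script_engine(uri))
        brawny_execute(uri, body, sizeof body);   /* handed to the brawny core */
    else
        snprintf(body, sizeof body, "<html>static copy of %s</html>", uri);
    printf("200 OK %s -> %s\n", uri, body);       /* reply returned to client */
}

int main(void)
{
    wimpy_handle("/index.html");
    wimpy_handle("/cart.php?item=42");
    return 0;
}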
[0021] Given that server code lacking server side scripts is
computationally lightweight, and that many Web API types are
Representational State Transfer (REST) based, require only HTML
processing, and on most occasions require no persistent state,
wimpy cores (112a to 112c) can be highly suited to executing such
lightweight functions. When scripts and computation are required, the
computation is handled favorably by brawny cores 108 before the
results are served to end users. The ability to service low
computation user queries with a low latency, and the ability to
introduce dynamicity into the web page by supporting server-side
scripting make the combination of wimpy and brawny cores an ideal
fit for traditional web server functions. In the enterprise and
private datacenter, simple object access protocol (SOAP) is often
used, making the ability to context switch with sessions
performance critical, and the ability of wimpy cores to save the
context in an extended cache can enhance performance
significantly.
[0022] FIG. 2 illustrates an embodiment with an assembly that is
favorably suited for handling real time traffic such as video
streaming. The assembly comprises a group of web servers that
are partitioned across a group of brawny processor core(s) 208 and
a set of wimpy cores (212a to 212c) housed in a rack server module
240. The embodiment of FIG. 2 splits a traditional server module
capable of handling real time traffic across a combination of
processor cores, which act as separate processing entities. In
some embodiments, a rack server module 240 further includes a
switch (100), which can provide input-output memory management unit
(IOMMU) functions 102.
[0023] Each of the wimpy processor cores (e.g., ARM cores) (212a to
212c) can be mounted on an in-memory module (not shown) and each of
them can be allocated a memory channel (210a to 210c). At least one
of the wimpy processor cores (212a to 212c) can be capable of
running tight, computationally lightweight web server code for
servicing applications that need to be transmitted with a very low
latency/jitter. Example applications such as video, audio, or voice
over IP (VoIP) streaming involve client requests that need to be
handled with as little latency as possible. One particular protocol
suitable for the disclosed embodiment is Real-Time Transport
Protocol (RTP), an Internet protocol for transmitting real-time
data such as audio and video. RTP itself does not guarantee
real-time delivery of data, but it does provide mechanisms for the
sending and receiving applications to support streaming data.
[0024] Brawny processor core(s) 208 can be connected by bus 206 to
switch 204 (which may be an OpenFlow or other virtual switch). In
one embodiment, such a bus 206 can be a front side bus.
[0025] In operation, server module 240 can handle several client
requests and service information in real time. The stateful nature
of applications such as RTP/video streaming makes the embodiment
amenable to handling several queries at a very high throughput. The
embodiment can have an engineered low latency context overhead
system that enables wimpy cores (212a to 212c) to shift from
servicing one session to another session in real time. Such a
context switch system can enable it to meet the quality of service
(QoS) and jitter requirements of RTP and video traffic. This can
provide substantial performance improvement if the overlay control
plane and data plane (for handling real time application related
traffic) are split across a brawny processor 208 and a number of
wimpy cores (212a to 212c). The wimpy cores (212a to 212c) can be
favorably suited to handling the data plane and servicing the
actual streaming of data in video/audio streaming or RTP
applications. The ability of wimpy cores (212a to 212c) to switch
between multiple sessions with low latency makes them suitable for
handling of the data plane.
[0026] For example, wimpy cores (212a to 212c) can run code that
quickly constructs data that is in an RTP format by concatenating
data (that is available locally or through direct memory access
(DMA) from main memory or a hard disk) with sequence number,
synchronization data, timestamp etc., and sends it over to clients
according to a predetermined protocol. The wimpy cores (212a to
212c) can be capable of switching to a new session/new client with
a very low latency and performing an RTP data transport for the new
session. The brawny cores 208 can be favorably suited for overlay
control plane functionality.
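For illustration of the data-plane task described above, the following C sketch packs a minimal RTP (RFC 3550) header -- version, payload type, sequence number, timestamp, and SSRC -- in front of locally available payload bytes; the field values and function names are hypothetical examples, not the disclosed implementation.

#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define RTP_HEADER_LEN 12

/* Pack a 12-byte RTP header followed by the payload into out[]. */
static size_t build_rtp_packet(uint8_t *out, uint8_t payload_type,
                               uint16_t seq, uint32_t timestamp, uint32_t ssrc,
                               const uint8_t *payload, size_t payload_len)
{
    out[0]  = 0x80;                      /* V=2, P=0, X=0, CC=0 */
    out[1]  = payload_type & 0x7F;       /* M=0, PT */
    out[2]  = (uint8_t)(seq >> 8);
    out[3]  = (uint8_t)(seq & 0xFF);
    out[4]  = (uint8_t)(timestamp >> 24);
    out[5]  = (uint8_t)(timestamp >> 16);
    out[6]  = (uint8_t)(timestamp >> 8);
    out[7]  = (uint8_t)(timestamp & 0xFF);
    out[8]  = (uint8_t)(ssrc >> 24);
    out[9]  = (uint8_t)(ssrc >> 16);
    out[10] = (uint8_t)(ssrc >> 8);
    out[11] = (uint8_t)(ssrc & 0xFF);
    memcpy(out + RTP_HEADER_LEN, payload, payload_len);
    return RTP_HEADER_LEN + payload_len;
}

int main(void)
{
    uint8_t media[160] = {0};            /* e.g., 20 ms of G.711 audio */
    uint8_t pkt[RTP_HEADER_LEN + sizeof media];
    size_t n = build_rtp_packet(pkt, 0 /* PCMU */, 4711, 160000, 0xDEADBEEF,
                                media, sizeof media);
    printf("built %zu-byte RTP packet, seq=%u\n", n,
           (unsigned)((pkt[2] << 8) | pkt[3]));
    return 0;
}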
[0027] The overlay control plane can often involve computationally
expensive actions such as setting up a session, monitoring session
statistics, and providing information on QoS and feedback to
session participants. The overlay control plane and the data plane
can communicate over a connection (e.g., an Ethernet-type
connection) that is tunneled on a memory bus such as a DDR bus.
Typically, overlay control can establish sessions for features such
as audio/videoconferencing, interactive gaming, and call forwarding
to be deployed over IP networks, including traditional telephony
features such as personal mobility, time-of-day routing and call
forwarding based on the geographical location of the person being
called. For example, the overlay control plane can be responsible
for executing RTP control protocol (RTCP, which forms part of the
RTP protocol used to carry VoIP communications and monitors QoS);
Session Initiation Protocol (SIP, which is an application-layer
control signaling protocol for Internet Telephony); Session
Description Protocol (SDP, which is a protocol that defines a
text-based format for describing streaming media sessions and
multicast transmissions); or other low latency data streaming
protocols.
[0028] FIG. 3 illustrates an embodiment with a proxy server--web
server assembly that is partitioned across a group of brawny
processor core(s) 328 (housed in a traditional server module 360)
and a set of wimpy cores (312a to 312c) housed in a rack server
module 340. The embodiment can include a proxy server module 340
that can handle content that is frequently accessed. A switch/load
balancer apparatus 320 can direct all incoming queries to the proxy
server module 340. The proxy server module 340 can look up
frequently accessed data in its local memory and respond to the query
if such data is available. The proxy server module
340 can also store server side code that is frequently accessed and
can act as a processing resource for executing the hot code. For
queries that are not part of the rack hot code, the wimpy cores
(312a to 312c) can redirect the traffic to brawny cores (308, 328)
for processing and response.
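The hot-content decision described above can be illustrated, under assumed data structures and an assumed promotion threshold, by the following C sketch in which the proxy counts accesses per URI, answers locally once an item is "hot", and redirects everything else to the brawny cores; all names and values are illustrative.

#include <stdio.h>
#include <string.h>

#define SLOTS 8
#define HOT_THRESHOLD 3   /* assumed: promote an item after 3 accesses */

struct hot_entry {
    char uri[64];
    int  hits;
};

static struct hot_entry table[SLOTS];

/* Returns 1 if the proxy should answer locally, 0 to redirect to brawny cores. */
static int proxy_route(const char *uri)
{
    for (int i = 0; i < SLOTS; i++)
        if (table[i].uri[0] && strcmp(table[i].uri, uri) == 0)
            return ++table[i].hits >= HOT_THRESHOLD;
    for (int i = 0; i < SLOTS; i++) {        /* first sighting: start counting */
        if (!table[i].uri[0]) {
            snprintf(table[i].uri, sizeof table[i].uri, "%s", uri);
            table[i].hits = 1;
            break;
        }
    }
    return 0;
}

int main(void)
{
    const char *uri = "/popular/video/manifest";
    for (int i = 0; i < 4; i++)
        printf("request %d: %s\n", i + 1,
               proxy_route(uri) ? "served by proxy" : "redirected to brawny cores");
    return 0;
}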
[0029] In particular embodiments, a rack server module 340 further
includes a switch (100), which can provide input-output memory
management unit (IOMMU) functions 302, and a switch 304 (which may
be an OpenFlow or other virtual switch). Brawny processor core(s)
308 can be connected to switch 304 by bus 306, which can be a front
side bus. A traditional server module 360 can also include a switch
324 that can provide IOMMU functions 326.
[0030] The following example(s) provide illustration and discussion
of exemplary hardware and data processing systems suitable for
implementation and operation of the foregoing discussed systems and
methods. In particular, hardware and operation of wimpy cores or
computational elements connected to a memory bus and mounted in
a DIMM or other conventional memory socket are discussed.
[0031] FIG. 4-1 is a cartoon schematically illustrating a data
processing system 400 including a removable computation module 402
for offload of data processing from x86 or similar main/server
processors 403 to modules connected to a memory bus 405.
[0032] Such modules 402 can be XIMM modules, as described herein or
equivalents, and can have multiple computation elements that can be
referred to as "offload processors" because they offload various
"light touch" processing tasks such as HTML, video, packet level
services, security, or data analytics. This is of particular
advantage for applications that require frequent random access or
application context switching, since many server processors incur
significant power usage or have data throughput limitations that
can be greatly reduced by transfer of the computation to lower
power and more memory efficient offload processors.
[0033] The computation elements or offload processors can be
accessible through memory bus 405. In this embodiment, the module
can be inserted into a Dual Inline Memory Module (DIMM) slot on a
commodity computer or server using a DIMM connector (407),
providing a significant increase in effective computing power to
system 400. The module (e.g., XIMM) may communicate with other
components in the commodity computer or server via one of a variety
of buses, including but not limited to any version of existing
double data rate standards (e.g., DDR, DDR2, DDR3, etc.).
[0034] The illustrated embodiment of the module 402 contains five
offload processors (400a, 400b, 400c, 400d, 400e); however, other
embodiments containing greater or fewer numbers of processors are
contemplated. The offload processors (400a to 400e) can be custom
manufactured or one of a variety of commodity processors, including
but not limited to field-programmable gate arrays (FPGAs),
microprocessors, reduced instruction set computers (RISC),
microcontrollers or ARM processors. The computation elements or
offload processors can include combinations of computational FPGAs
such as those based on Altera, Xilinx (e.g., Artix.TM. class or
Zynq.RTM. architecture, e.g., Zynq.RTM. 7020), and/or conventional
processors such as those based on Intel Atom or ARM architecture
(e.g., ARM A9). For many applications, ARM processors having
advanced memory handling features such as a snoop control unit
(SCU) are preferred, since this can allow coherent read and write
of memory. Other preferred advanced memory features can include
processors that support an accelerator coherency port (ACP) that
can allow for coherent supplementation of the cache through an FPGA
fabric or computational element.
[0035] Each offload processor (400a to 400e) on the module 402 may
run one of a variety of operating systems including but not limited
to Apache or Linux. In addition, the offload processors (400a to
400e) may have access to a plurality of dedicated or shared storage
methods. In this embodiment, each offload processor can connect to
one or more storage units (in this embodiment, pairs of storage
units 404a, 404b, 404c and 404d). Storage units (404a to 404d) can
be of a variety of storage types, including but not limited to
random access memory (RAM), dynamic random access memory (DRAM),
sequential access memory (SAM), static random access memory (SRAM),
synchronous dynamic random access memory (SDRAM), reduced latency
dynamic random access memory (RLDRAM), flash memory, or other
emerging memory standards such as those based on DDR4 or hybrid
memory cubes (HMC).
[0036] FIG. 4-2 shows an example layout of a module (e.g., XIMM)
such as that described in FIG. 4-1, as well as a connectivity
diagram between the components of the module. In this example, five
Xilinx.TM. Zynq.RTM. 7020 (416a, 416b, 416c, 416d, 416e and 416 in
the connectivity diagram) programmable systems-on-a-chip (SoC) are
used as computational FPGAs/offload processors. These offload
processors can communicate with each other using memory-mapped
input-output (MMIO) (412). The types of storage units used in this
example are SDRAM (SD, one shown as 408) and RLDRAM (RLD, three
shown as 406a, 406b, 406c) and an Inphi.TM. iMB02 memory buffer
418. Down conversion of 3.3 V to 2.5 V is required to connect
the RLDRAM (406a to 406c) with the Zynq.RTM. components. The
components are connected to the offload processors and to each
other via a DDR3 (414) memory bus. Advantageously, the indicated
layout maximizes memory resource availability without violating the
number of pins available under the DIMM standard.
[0037] In this embodiment, one of the Zynq.RTM. computational FPGAs
(416a to 416e) can act as an arbiter providing a memory cache, giving
the ability to have peer to peer sharing of data (via memcached or
OMQ memory formalisms) between the other Zynq.RTM. computational
FPGAs (416a to 416e). Traffic departing for the computational FPGAs
can be controlled through memory mapped I/O. The arbiter queues
session data for use, and when a computational FPGA asks for an
address outside of the provided session, the arbiter can be the
first level of retrieval, external processing determination, and
predictors set.
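A minimal sketch of the arbiter role described above is given below in C, assuming a simple key-value store standing in for the shared memory cache; a NULL return models the case where the arbiter becomes the first level of retrieval or defers to external processing. All structures and names are illustrative assumptions.

#include <stdio.h>
#include <string.h>

#define ARBITER_SLOTS 8

struct arbiter_entry {
    char key[32];
    char value[64];
    int  valid;
};

static struct arbiter_entry cache[ARBITER_SLOTS];

/* Insert session data into the first free slot (sketch: no eviction policy). */
static void arbiter_put(const char *key, const char *value)
{
    for (int i = 0; i < ARBITER_SLOTS; i++) {
        if (!cache[i].valid) {
            snprintf(cache[i].key, sizeof cache[i].key, "%s", key);
            snprintf(cache[i].value, sizeof cache[i].value, "%s", value);
            cache[i].valid = 1;
            return;
        }
    }
}

/* Peer-facing lookup; NULL means the arbiter must retrieve externally. */
static const char *arbiter_get(const char *key)
{
    for (int i = 0; i < ARBITER_SLOTS; i++)
        if (cache[i].valid && strcmp(cache[i].key, key) == 0)
            return cache[i].value;
    return NULL;
}

int main(void)
{
    arbiter_put("session:42", "tcp state + app context");
    const char *v = arbiter_get("session:42");
    printf("%s\n", v ? v : "miss: retrieve externally");
    printf("%s\n", arbiter_get("session:7") ? "hit" : "miss: retrieve externally");
    return 0;
}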
[0038] FIG. 4-3 shows two possible architectures for a module
(e.g., XIMM) in a simulation (Xockets MAX and MIN). Xockets MIN
(420a) can be used in low-end public cloud servers, containing
twenty ARM cores (420b) spread across fourteen DIMM slots in a
commodity server which has two Opteron x86 processors and two
network interface cards (NICs) (420c). This architecture can
provide a minimal benefit per Watt of power used. Xockets MAX
(422a) contains eighty ARM cores (422b) across eight DIMM slots, in
a server with two Opteron x86 processors and four NICs (422c). This
architecture can provide a maximum benefit per Watt of power
used.
[0039] FIG. 4-4 shows a representative power budget for an example
of a module (e.g., XIMM) according to a particular embodiment. Each
component is listed (424a, 424b, 424c, 424d) along with its power
profile. Average total and total wattages are also listed (426a,
426b). In total, especially for I/O packet processing with packet
sizes on the order of 1 KB, the module can have a low average
power budget that is easily provided by the 22 V.sub.dd
pins per DIMM. Additionally, the expected thermal output can be
handled by inexpensive conductive heat spreaders, without requiring
additional convective, conductive, or thermoelectric cooling. In
certain situations, digital thermometers can be implemented to
dynamically reduce performance (and consequent heat generation) if
needed.
[0040] Operation of one embodiment of a module 430 (e.g., XIMM)
using an ARM A9 architecture is illustrated with respect to FIG.
4-5. Use of ARM A9 architecture in conjunction with an FPGA fabric
and memory, in this case shown as reduced latency DRAM (RLDRAM)
438, can simplify or make possible zero-overhead context
switching, memory compression and CPI, in part by allowing hardware
context switching synchronized with network queuing. In this way,
there can be a one-to-one mapping between threads and queues. As
illustrated, the ARM A9 architecture includes a Snoop Control Unit
432 (SCU). This unit allows one to read out and write in memory
coherently. Additionally, the Accelerator Coherency Port 434 (ACP)
allows for coherent supplementation of the cache throughout the
FPGA 436. The RLDRAM 438 provides the auxiliary bandwidth to read
and write the ping-pong cache supplement (435): Block1$ and Block2$
during packet-level meta-data processing.
[0041] The following table (Table 1) illustrates potential states
that can exist in the scheduling of queues/threads to XIMM
processors and memory such as illustrated in FIG. 4-5.
TABLE-US-00001
TABLE 1
Queue/Thread State         | HW treatment
Waiting for Ingress Packet | All ingress data has been processed and the thread awaits further communication.
Waiting for MMIO           | A functional call to MM hardware (such as HW encryption or transcoding) was made.
Waiting for Rate-limit     | The thread's resource consumption exceeds limit, due to other connections idling.
Currently being processed  | One of the ARM cores is already processing this thread, cannot schedule again.
Ready for Selection        | The thread is ready for context selection.
These states can help coordinate the complex synchronization
between processes, network traffic, and memory-mapped hardware.
When a queue is selected by a traffic manager, a pipeline
coordinates swapping in the desired L2 cache (440), transferring
the reassembled IO data into the memory space of the executing
process. In certain cases, no packets are pending in the queue, but
computation is still pending to service previous packets. Once this
process makes a memory reference outside of the data swapped, a
scheduler can require queued data from a network interface card
(NIC) to continue scheduling the thread. To provide fair queuing to
a process not having data, the maximum context size is assumed as
data processed. In this way, a queue must be provisioned as the
greater of computational resource and network bandwidth resource,
for example, each as a ratio of an 800 MHz A9 and 3 Gbps of
bandwidth. Given the lopsidedness of this ratio, the ARM core is
generally indicated to be worthwhile for computation having many
parallel sessions (such that the hardware's prefetching of
session-specific data and TCP/reassembly offloads a large portion
of the CPU load) and those requiring minimal general purpose
processing of data.
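By way of illustration, the following C sketch encodes the states of Table 1 and a simple selection pass in which only threads in the Ready for Selection state are candidates for the next context switch; the structures and the selection policy shown are assumptions for illustration, not the disclosed hardware scheduler.

#include <stdio.h>

enum thread_state {
    WAITING_FOR_INGRESS_PACKET,
    WAITING_FOR_MMIO,
    WAITING_FOR_RATE_LIMIT,
    CURRENTLY_BEING_PROCESSED,
    READY_FOR_SELECTION
};

struct queue_thread {
    int id;
    enum thread_state state;
};

/* Return the id of the first selectable thread, or -1 if none is ready. */
static int select_next_thread(const struct queue_thread *t, int count)
{
    for (int i = 0; i < count; i++)
        if (t[i].state == READY_FOR_SELECTION)
            return t[i].id;
    return -1;
}

int main(void)
{
    struct queue_thread pool[] = {
        { 0, WAITING_FOR_INGRESS_PACKET },
        { 1, CURRENTLY_BEING_PROCESSED },
        { 2, READY_FOR_SELECTION },
    };
    printf("next thread to schedule: %d\n", select_next_thread(pool, 3));
    return 0;
}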
[0042] Essentially zero-overhead context switching is also possible
using modules as disclosed in FIG. 4-5. Because per packet
processing has minimal state associated with it, and represents
inherent engineered parallelism, minimal memory access is needed,
aside from packet buffering. On the other hand, after packet
reconstruction, the entire memory state of the session can be
accessed, and so can require maximal memory utility. By using the
time of packet-level processing to prefetch the next hardware
scheduled application-level service context in two different
processing passes, the memory can always be available for
prefetching. Additionally, the FPGA 436 can hold a supplemental
"ping-pong" cache (435), one block of which is read and written with
every context switch while the other block is in use. As previously noted, this is
enabled in part by the SCU 432, which allows one to read out and
write in memory coherently, and ACP 434 for coherent
supplementation of the cache throughout the FPGA 436. The RLDRAM
438 provides for read and write to the ping-pong cache supplement
435 (shown as Block1$ and Block2$) during packet-level meta-data
processing. In the embodiment shown, only locally terminating
queues can prompt context switching.
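A hedged sketch of the double-buffered ("ping-pong") context store follows, in C: while one block serves the running session, the other can be filled with the next scheduled session's context, and the roles swap at each context switch. The block names mirror Block1$ and Block2$, but the code is purely illustrative.

#include <stdio.h>
#include <string.h>

#define CONTEXT_BYTES 64

struct ping_pong {
    char block[2][CONTEXT_BYTES];   /* stand-ins for Block1$ and Block2$ */
    int active;                     /* block currently used by the core */
};

/* Prefetch the next session's context into the idle block. */
static void prefetch_context(struct ping_pong *pp, const char *ctx)
{
    strncpy(pp->block[1 - pp->active], ctx, CONTEXT_BYTES - 1);
}

/* Swap roles: the prefetched block becomes active. */
static const char *context_switch(struct ping_pong *pp)
{
    pp->active = 1 - pp->active;
    return pp->block[pp->active];
}

int main(void)
{
    struct ping_pong pp = { .active = 0 };
    strncpy(pp.block[0], "session A context", CONTEXT_BYTES - 1);
    prefetch_context(&pp, "session B context");       /* done during packet work */
    printf("now running: %s\n", context_switch(&pp)); /* switch is an index flip */
    return 0;
}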
[0043] In operation, metadata transport code can relieve a main or
host processor from tasks including fragmentation and reassembly,
and checksum and other metadata services (e.g., accounting, IPSec,
SSL, Overlay, etc.). As IO data streams in and out, L1 cache 437
can be filled during packet processing. During a context switch,
the lock-down portion of a translation lookaside buffer (TLB) of an
L1 cache can be rewritten with the addresses corresponding to the
new context. In one very particular implementation, the following
four commands can be executed for the current memory space.
[0044] MRC p15,0,r0,c10,c0,0 ; read the lockdown register
[0045] BIC r0,r0,#1 ; clear preserve bit
[0046] MCR p15,0,r0,c10,c0,0 ; write to the lockdown register
[0047] write the old value to the memory mapped Block RAM
This is a small 32 cycle overhead to bear. Other TLB entries can be
used by the XIMM stochastically.
[0048] Bandwidths and capacities of the memories can be precisely
allocated to support context switching as well as applications such
as Openflow processing, billing, accounting, and header filtering
programs.
[0049] For additional performance improvements, the ACP 434 can be
used not just for cache supplementation, but hardware functionality
supplementation, in part by exploitation of the memory space
allocation. An operand can be written to memory and the new
function called through customization of specific open source
libraries, putting the thread to sleep, and a hardware scheduler
can validate it for scheduling again once the results are ready.
For example, OpenVPN uses the OpenSSL library, where the
encrypt/decrypt functions 439 can be memory mapped. Large blocks
are then available to be exported without delay or without consuming
the L2 cache 440, using the ACP 434. Hence, a minimum number of calls
are needed within the processing window of a context switch,
improving overall performance.
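Assuming, for illustration only, that the accelerator is exposed as a memory-mapped register block, the following C sketch shows the pattern described above: the caller writes an operand, the thread would then sleep, and the scheduler re-runs it once a ready flag indicates the result is valid. The register layout and the fake_hardware_step() stand-in are hypothetical.

#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

struct mmio_crypto {            /* hypothetical memory-mapped register block */
    volatile uint32_t operand;
    volatile uint32_t result;
    volatile uint32_t ready;    /* set by hardware when result is valid */
};

static struct mmio_crypto regs; /* would be a mapped device address on real HW */

static void submit(uint32_t operand)
{
    regs.ready = 0;
    regs.operand = operand;     /* the calling thread would now be put to sleep */
}

/* Stand-in for the FPGA fabric completing the call and raising ready. */
static void fake_hardware_step(void)
{
    regs.result = regs.operand ^ 0xA5A5A5A5u;   /* placeholder transformation */
    regs.ready = 1;
}

int main(void)
{
    submit(0x12345678u);
    fake_hardware_step();                 /* in reality, done by the FPGA */
    if (regs.ready)                       /* scheduler marks the thread runnable */
        printf("result: 0x%08" PRIX32 "\n", regs.result);
    return 0;
}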
[0050] It should be appreciated that in the foregoing description
of exemplary embodiments of the invention, various features of the
invention are sometimes grouped together in a single embodiment,
figure, or description thereof for the purpose of streamlining the
disclosure and aiding in the understanding of one or more of the
various inventive aspects. This method of disclosure, however, is
not to be interpreted as reflecting an intention that the claimed
invention requires more features than are expressly recited in each
claim. Rather, as the following claims reflect, inventive aspects
lie in less than all features of a single foregoing disclosed
embodiment. Thus, the claims following the detailed description are
hereby expressly incorporated into this detailed description, with
each claim standing on its own as a separate embodiment of this
invention.
[0051] It is also understood that the embodiments of the invention
may be practiced in the absence of an element and/or step not
specifically disclosed. That is, an inventive feature of the
invention may be elimination of an element.
[0052] Accordingly, while the various aspects of the particular
embodiments set forth herein have been described in detail, the
present invention could be subject to various changes,
substitutions, and alterations without departing from the spirit
and scope of the invention.
* * * * *