U.S. patent application number 11/489,393 was filed with the patent office on July 18, 2006, and published on January 18, 2007, as publication number 2007/0014246, for a method and system for transparent TCP offload with per flow estimation of a far end transmit window.
Invention is credited to Eliezer Aloni, Guy Corem, Aviv Greenberg, Assaf Grunfeld, Ori Hanegbi, Dov Hirshfeld, Shay Mizrachi, Rafi Shalom, Eliezer Tamir.
United States Patent Application 20070014246
Kind Code: A1
Application Number: 11/489,393
Family ID: 38163302
Filed: July 18, 2006
Published: January 18, 2007
Aloni; Eliezer; et al.

Method and system for transparent TCP offload with per flow estimation of a far end transmit window
Abstract
Certain aspects of a method and system for transparent
transmission control protocol (TCP) offload with per flow
estimation of far end transmit window are disclosed. Aspects of a
method may include storing, at a network interface card (NIC)
processor, state information for a received TCP segment and state
information for transmitted TCP segments for a determined network
flow without transferring state information for the received TCP
segment to a host system communicatively coupled to the NIC. The
generation of a new TCP segment comprising the collected received
TCP segments may be controlled based on the occurrence of a
termination event and a transmit window size. The period of time
for aggregation of received TCP segments may be calculated based on
the sequence numbers of the next expected TCP segment and the next
received acknowledgement packet.
Inventors: Aloni; Eliezer (Zur Yigal, IL); Shalom; Rafi (Givat Shmuel, IL); Mizrachi; Shay (Hod HaSharon, IL); Hirshfeld; Dov (Givat Shmuel, IL); Greenberg; Aviv (Netanya, IL); Grunfeld; Assaf (Hod HaSharon, IL); Tamir; Eliezer (Beit Shemesh, IL); Corem; Guy (Herzlia, IL); Hanegbi; Ori (Herzlia, IL)

Correspondence Address:
MCANDREWS HELD & MALLOY, LTD
500 WEST MADISON STREET, SUITE 3400
CHICAGO, IL 60661, US
Family ID: 38163302
Appl. No.: 11/489,393
Filed: July 18, 2006
Related U.S. Patent Documents

Application Number: 60/700,544
Filing Date: July 18, 2005
Current U.S. Class: 370/252; 370/254
Current CPC Class: H04L 47/27 (20130101); H04L 47/2441 (20130101); H04L 69/166 (20130101); Y02D 50/30 (20180101); H04L 47/193 (20130101); H04L 49/90 (20130101); H04L 47/36 (20130101); H04L 49/9094 (20130101); H04L 69/161 (20130101); Y02D 10/14 (20180101); H04L 47/10 (20130101); H04L 69/163 (20130101); H04L 69/16 (20130101); H04L 69/12 (20130101); Y02D 10/00 (20180101); H04L 47/41 (20130101); Y02D 30/50 (20200801); G06F 13/128 (20130101); H04L 49/9063 (20130101)
Class at Publication: 370/252; 370/254
International Class: H04J 1/16 (20060101) H04J001/16; H04L 12/28 (20060101) H04L012/28
Claims
1. A method for processing network information, the method
comprising: storing at a NIC, state information for a received TCP
segment for a determined network flow and state information for
transmitted TCP segments for said determined network flow, without
transferring state information for said received TCP segment and
said state information for said transmitted TCP segments to a host
system communicatively coupled to said NIC; controlling generation
of a new TCP segment based on occurrence of a termination event and
a transmit window size; and communicating said generated new TCP
segment comprising said collected at least one received TCP
segment, new state information for said new TCP segment, and said
state information for said transmitted TCP segments to said host
system for TCP offload.
2. The method according to claim 1, wherein said generation of said
new TCP segment occurs after a minimum of: a time period for a
termination event to occur and a transmit window size.
3. The method according to claim 2, further comprising calculating
said transmit window size based on a sequence number of a next of
said at least one received TCP segment and a sequence number of a
next received acknowledgement packet from said host system.
4. The method according to claim 2, further comprising classifying
a state of said determined network flow as one of: an in order
state, an out of order state, and an unknown state.
5. The method according to claim 4, further comprising if a
sequence number of at least one acknowledgement (ACK) packet is
greater than an isle length of said collected at least one TCP
segment, modifying said classified state of said determined network
flow from said out of order state to said in order state.
6. The method according to claim 4, further comprising setting said
transmit window size to zero when said classified state of said
determined network flow is said out of order state.
7. The method according to claim 2, wherein said termination event
occurs when at least one of the following occurs: a TCP/Internet
Protocol (TCP/IP) frame associated with said determined network
flow comprises a TCP flag with at least one of: a PSH bit, a FIN
bit, and a RST bit; a TCP/IP frame associated with said determined
network flow comprises a TCP payload length that is equal to or
greater than a maximum IP datagram size; a timer associated with
said collecting of said at least one TCP segment expires; a new
entry in a flow lookup table (FLT) is generated when said FLT is
full; a first IP fragment associated with said determined network
flow is received; a transmit window is modified; a change in a
number of TCP acknowledgments (ACKs) is greater than or equal to an
ACK threshold; a TCP/IP frame associated with said determined
network flow comprises a selective TCP acknowledgment (SACK); and a
TCP/IP frame associated with said determined network flow comprises
a number of duplicated TCP acknowledgments that is equal to or
greater than a duplicated ACK threshold.
8. The method according to claim 1, further comprising initializing
a state of said determined network flow to an unknown state.
9. The method according to claim 8, further comprising updating
said initialized state based on said collected at least one TCP
segment.
10. The method according to claim 1, further comprising generating
said new TCP segment by aggregating a plurality of said at least
one received TCP segment for said determined network flow after
said transmit window size.
11. A system for processing network information, the system
comprising: a NIC processor that enables storage of state
information for a received TCP segment for a determined network
flow and state information for transmitted TCP segments for said
determined network flow, without transferring state information for
said received TCP segment and said state information for said
transmitted TCP segments to a host system communicatively coupled
to said NIC; said NIC processor enables controlling of generation
of a new TCP segment based on occurrence of a termination event and
a transmit window size; and said NIC processor enables
communication of said generated new TCP segment comprising said
collected at least one received TCP segment, new state information
for said new TCP segment, and said state information for said
transmitted TCP segments to said host system for TCP offload.
12. The system according to claim 11, wherein said generation of
said new TCP segment occurs after a minimum of: a time period for a
termination event to occur and a transmit window size.
13. The system according to claim 12, wherein said NIC processor
enables calculation of said transmit window size based on a
sequence number of a next of said at least one received TCP segment
and a sequence number of a next received acknowledgement packet
from said host system.
14. The system according to claim 12, wherein said NIC processor
enables classification of a state of said determined network flow
as one of: an in order state, an out of order state, and an unknown
state.
15. The system according to claim 14, wherein said NIC processor
enables modification of said classified state of said determined
network flow from said out of order state to said in order state,
if a sequence number of at least one acknowledgement (ACK) packet
is greater than an isle length of said collected at least one TCP
segment.
16. The system according to claim 14, wherein said NIC processor
enables setting of said transmit window size to zero when said
classified state of said determined network flow is said out of
order state.
17. The system according to claim 12, wherein said termination
event occurs when at least one of the following occurs: a
TCP/Internet Protocol (TCP/IP) frame associated with said
determined network flow comprises a TCP flag with at least one of:
a PSH bit, a FIN bit, and a RST bit; a TCP/IP frame associated with
said determined network flow comprises a TCP payload length that is
equal to or greater than a maximum IP datagram size; a timer
associated with said collecting of said at least one TCP segment
expires; a new entry in a flow lookup table (FLT) is generated when
said FLT is full; a first IP fragment associated with said
determined network flow is received; a transmit window is modified;
a change in a number of TCP acknowledgments (ACKs) is greater than
or equal to an ACK threshold; a TCP/IP frame associated with said
determined network flow comprises a selective TCP acknowledgment
(SACK); and a TCP/IP frame associated with said determined network
flow comprises a number of duplicated TCP acknowledgments that is
equal to or greater than a duplicated ACK threshold.
18. The system according to claim 11, wherein said NIC processor
enables initialization of a state of said determined network flow
to an unknown state.
19. The system according to claim 18, wherein said NIC processor
enables updating of said initialized state based on said collected
at least one TCP segment.
20. The system according to claim 11, wherein said NIC processor
enables generation of said new TCP segment by aggregating a
plurality of said at least one received TCP segment for said
determined network flow after said transmit window size.
Description
[0001] This application makes reference to, claims priority to, and claims the benefit of U.S. Provisional Application Ser. No. 60/700,544, filed July 18, 2005. The above referenced application is hereby incorporated herein by reference in its entirety.
FIELD OF THE INVENTION
[0002] Certain embodiments of the invention relate to processing of
TCP data and related TCP information. More specifically, certain
embodiments of the invention relate to a method and system for
transparent TCP offload with per flow estimation of a far end
transmit window.
BACKGROUND OF THE INVENTION
[0003] There are different approaches for reducing the processing power required for TCP/IP stack processing. In a TCP Offload Engine (TOE), the offloading engine performs all or most of the TCP processing, presenting to the upper layer a stream of data. There may be various disadvantages to this approach. The TOE is tightly coupled with the operating system and therefore requires solutions that are dependent on the operating system and may require changes in the operating system to support it. The TOE may require a side by side stack solution, requiring some kind of manual configuration, either by the application, for example, by explicitly specifying a socket address family for accelerated connections, or by an IT administrator, for example, by explicitly specifying an IP subnet address for accelerated connections to select which of the TCP flows will be offloaded. In addition, the offload engine is very complex, as it needs to implement TCP packet processing.
[0004] Large segment offload (LSO)/transmit segment offload (TSO) may be utilized to reduce the required host processing power by reducing the transmit packet processing. In this approach the host sends to the NIC transmit units bigger than the maximum transmission unit (MTU), and the NIC cuts them into segments according to the MTU. Since part of the host processing is linear in the number of transmitted units, this reduces the required host processing power. While efficient in reducing the transmit packet processing, LSO does not help with receive packet processing. In addition, for each single large transmit unit sent by the host, the host would receive from the far end multiple ACKs, one for each MTU-sized segment. The multiple ACKs consume scarce and expensive bandwidth, thereby reducing throughput and efficiency.
[0005] In large receive offload (LRO), a stateless receive offload mechanism, the TCP flows may be split among multiple hardware queues according to a hash function that guarantees that a specific TCP flow is always directed to the same hardware queue. For each hardware queue, the mechanism takes advantage of interrupt coalescing to scan the queue and aggregate subsequent packets on the queue belonging to the same TCP flow into a single large receive unit.
[0006] While this mechanism does not require any additional hardware from the NIC besides multiple hardware queues, it may have various performance limitations. For example, if the number of flows is larger than the number of hardware queues, multiple flows fall into the same queue, resulting in no LRO aggregation for that queue. If the number of flows is larger than twice the number of hardware queues, no LRO aggregation is performed on any of the flows. The aggregation may also be limited to the number of packets available to the host in one interrupt period. If the interrupt period is short and the number of flows is not small, the number of packets available to the host CPU for aggregation on each flow may be small, resulting in limited or no LRO aggregation, even if the number of hardware queues is large. The LRO aggregation may be performed on the host CPU, resulting in additional processing. The driver may deliver to the TCP stack a linked list of buffers comprising a header buffer followed by a series of data buffers, which may require more processing than in the case where all the data is contiguously delivered in one buffer.
[0007] Accordingly, the computational power of the offload engine
needs to be very high or at least the system needs a very large
buffer to compensate for any additional delays due to the delayed
processing of the out-of-order segments. When host memory is used
for temporary storage of out-of-order segments, additional system
memory bandwidth may be consumed when the previously out-of-order
segments are copied to respective buffers. The additional copying
provides a challenge for present memory subsystems, and as a
result, these memory subsystems are unable to support high rates
such as 10 Gbps.
[0008] In general, one challenge faced by TCP implementers wishing to design a flow-through NIC is that TCP segments may arrive out-of-order with respect to the order in which they were transmitted. This may prevent or otherwise hinder the immediate processing of the TCP control data and prevent the placing of the data in a host buffer. Accordingly, an implementer may be faced with the option of dropping out-of-order TCP segments or storing the TCP segments locally on the NIC until all the missing segments have been received. Once all the TCP segments have been received, they may be reordered and processed accordingly. In instances where the TCP segments are dropped or otherwise discarded, the sending side may have to re-transmit all the dropped TCP segments, which in some instances may result in about a fifty percent (50%) decrease in throughput or bandwidth utilization.
[0009] Further limitations and disadvantages of conventional and
traditional approaches will become apparent to one of skill in the
art, through comparison of such systems with some aspects of the
present invention as set forth in the remainder of the present
application with reference to the drawings.
BRIEF SUMMARY OF THE INVENTION
[0010] A method and/or system for transparent TCP offload with per
flow estimation of a far end transmit window, substantially as
shown in and/or described in connection with at least one of the
figures, as set forth more completely in the claims.
[0011] These and other advantages, aspects and novel features of
the present invention, as well as details of an illustrated
embodiment thereof, will be more fully understood from the
following description and drawings.
BRIEF DESCRIPTION OF SEVERAL VIEWS OF THE DRAWINGS
[0012] FIG. 1A is a block diagram of an exemplary system for
transparent TCP offload with transmit and receive coupling, in
accordance with an embodiment of the invention.
[0013] FIG. 1B is a block diagram of another exemplary system for
transparent TCP offload with transmit and receive coupling, in
accordance with an embodiment of the invention.
[0014] FIG. 1C is an alternative embodiment of an exemplary system
for transparent TCP offload with transmit and receive coupling, in
accordance with an embodiment of the invention.
[0015] FIG. 1D is a block diagram of a system for handling
transparent TCP offload with transmit and receive coupling, in
accordance with an embodiment of the invention.
[0016] FIG. 2A is a diagram illustrating exemplary steps that may
be utilized for handling out-of-order data when a packet P3 and a
packet P4 arrive out-of-order with respect to the order of
transmission, in accordance with an embodiment of the
invention.
[0017] FIG. 2B is a flow chart illustrating exemplary steps for
transparent TCP offload with transmit-receive coupling, in
accordance with an embodiment of the invention.
[0018] FIG. 3 is a flow chart illustrating exemplary steps for
transparent TCP offload with per flow estimation of far end
transmit window, in accordance with an embodiment of the
invention.
DETAILED DESCRIPTION OF THE INVENTION
[0019] Certain embodiments of the invention may be found in a
method and system for transparent TCP offload. Aspects of the
method and system may comprise storing, at a network interface card
(NIC) processor, state information for a received TCP segment and
state information for transmitted TCP segments for a determined
network flow without transferring state information for the
received TCP segment to a host system communicatively coupled to
the NIC. The generation of a new TCP segment comprising the
collected received TCP segments may be controlled based on the
occurrence of a termination event and a transmit window size. The
period of time for aggregation of received TCP segments may be
calculated based on the sequence numbers of the next expected TCP
segment and the next received acknowledgement packet. The generated
new TCP segment, new state information for the generated new TCP
segment, and the state information for the transmitted TCP segments
may be communicated to the host system for TCP offload.
[0020] Under conventional processing, each of the plurality of TCP
segments received would have to be individually processed by a host
processor in the host system. TCP processing requires extensive CPU
processing power in terms of both protocol processing and data
placement on the receiver side. Current processing systems and
methods involve the transfer of TCP state to dedicated hardware
such as a NIC, where significant changes to the host TCP stack and/or
underlying hardware are required.
[0021] FIG. 1A is a block diagram of an exemplary system for
transparent TCP offload, in accordance with an embodiment of the
invention. Accordingly, the system of FIG. 1A may be adapted to
handle transparent TCP offload of transmission control protocol
(TCP) datagrams or packets. Referring to FIG. 1A, the system may
comprise, for example, a CPU 102, a memory controller 104, a host
memory 106, a host interface 108, network subsystem 110 and an
Ethernet 112. The network subsystem 110 may comprise, for example,
a transparent TCP-enabled Ethernet Controller (TTEEC) or a
transparent TCP offload engine (TTOE) 114. The network subsystem
110 may comprise, for example, a network interface card (NIC). The
host interface 108 may be, for example, a peripheral component
interconnect (PCI), PCI-X, PCI-Express, ISA, SCSI or other type of
bus. The memory controller 104 may be coupled to the CPU 102, to
the host memory 106 and to the host interface 108. The host interface
108 may be coupled to the network subsystem 110 via the TTEEC/TTOE
114.
[0022] FIG. 1B is a block diagram of another exemplary system for
transparent TCP offload, in accordance with an embodiment of the
invention. Referring to FIG. 1B, the system may comprise, for
example, a CPU 102, a host memory 106, a dedicated memory 116 and a
chip set 118. The chip set 118 may comprise, for example, the
network subsystem 110 and the memory controller 104. The chip set
118 may be coupled to the CPU 102, to the host memory 106, to the
dedicated memory 116 and to the Ethernet 112. The network subsystem
110 of the chip set 118 may be coupled to the Ethernet 112. The
network subsystem 110 may comprise, for example, the TTEEC/TTOE 114
that may be coupled to the Ethernet 112. The network subsystem 110
may communicate to the Ethernet 112 via a wired and/or a wireless
connection, for example. The wireless connection may be a wireless
local area network (WLAN) connection as supported by the IEEE
802.11 standards, for example. The network subsystem 110 may also
comprise, for example, an on-chip memory 113. The dedicated memory
116 may provide buffers for context and/or data.
[0023] The network subsystem 110 may comprise a processor such as a
coalescer 111. The coalescer 111 may comprise suitable logic,
circuitry and/or code that may be enabled to handle the
accumulation or coalescing of TCP data. In this regard, the
coalescer 111 may utilize a flow lookup table (FLT) to maintain
information regarding current network flows for which TCP segments
are being collected for aggregation. The FLT may be stored in, for
example, the network subsystem 110. The FLT may comprise at least
one of the following: a source IP address, a destination IP
address, a source TCP address, a destination TCP address, for
example. In an alternative embodiment of the invention, at least
two different tables may be utilized, for example, a table
comprising a 4-tuple lookup to classify incoming packets according
to their flow. The 4-tuple lookup table may comprise at least one
of the following: a source IP address, a destination IP address, a
source TCP address, a destination TCP address, for example. A flow
context table may comprise state variables utilized for aggregation
such as TCP sequence numbers.
[0024] The FLT may also comprise at least one of a host buffer or
memory address including a scatter-gather-list (SGL) for
non-contiguous memory, cumulative acknowledgments (ACKs), a copy
of a TCP header and options, a copy of an IP header and options, a
copy of an Ethernet header, and/or accumulated TCP flags, for
example. The coalescer 111 may be enabled to generate a single
aggregated TCP segment from the accumulated or collected TCP
segments when a termination event occurs. The single aggregated TCP
segment may be communicated to the host memory 106, for
example.
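For illustration only, the following C sketch models one FLT entry and a linear 4-tuple lookup consistent with the description above. The structure, field, and function names (flt_entry, flt_lookup, and so on) are assumptions made for this sketch, not names from the patent, and a real NIC would typically use a hash or content-addressable memory rather than a linear search.

    #include <stddef.h>
    #include <stdint.h>

    /* Illustrative FLT entry; field names are assumptions. The fields
     * follow the lists in paragraphs [0023]-[0024]. */
    struct flt_entry {
        /* 4-tuple used to classify incoming packets to a flow. */
        uint32_t src_ip;
        uint32_t dst_ip;
        uint16_t src_port;          /* "source TCP address"      */
        uint16_t dst_port;          /* "destination TCP address" */
        /* State variables utilized for aggregation. */
        uint32_t rcv_nxt_l;         /* leftmost SN of the tracked isle  */
        uint32_t rcv_nxt_r;         /* rightmost SN of the tracked isle */
        uint32_t cumulative_ack;    /* cumulative acknowledgments       */
        uint8_t  tcp_hdr_copy[60];  /* copy of TCP header and options   */
        uint8_t  ip_hdr_copy[60];   /* copy of IP header and options    */
        uint8_t  eth_hdr_copy[14];  /* copy of Ethernet header          */
        uint16_t accumulated_flags; /* accumulated TCP flags            */
        int      in_use;
    };

    /* Linear 4-tuple search over the table. */
    struct flt_entry *flt_lookup(struct flt_entry *flt, size_t n,
                                 uint32_t sip, uint32_t dip,
                                 uint16_t sport, uint16_t dport)
    {
        for (size_t i = 0; i < n; i++) {
            if (flt[i].in_use &&
                flt[i].src_ip == sip && flt[i].dst_ip == dip &&
                flt[i].src_port == sport && flt[i].dst_port == dport)
                return &flt[i];
        }
        return NULL;  /* new flow: the caller creates an entry; creating
                         one while the FLT is full is a termination event
                         (see claim 7) */
    }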
[0025] Although illustrated, for example, as a CPU and an Ethernet,
the present invention need not be so limited to such examples and
may employ, for example, any type of processor and any type of data
link layer or physical media, respectively. Accordingly, although
illustrated as coupled to the Ethernet 112, the TTEEC or the TTOE
114 of FIG. 1A may be adapted for any type of data link layer or
physical media. Furthermore, the present invention also
contemplates different degrees of integration and separation
between the components illustrated in FIGS. 1A-B. For example, the
TTEEC/TTOE 114 may be a separate integrated chip from the chip set
118 embedded on a motherboard or may be embedded in a NIC.
Similarly, the coalescer 111 may be a separate integrated chip from
the chip set 118 embedded on a motherboard or may be embedded in a
NIC. In addition, the dedicated memory 116 may be integrated with
the chip set 118 or may be integrated with the network subsystem
110 of FIG. 1B.
[0026] FIG. 1C is an alternative embodiment of an exemplary system
for transparent TCP offload, in accordance with an embodiment of
the invention. Referring to FIG. 1C, there is shown a host
processor 124, a host memory/buffer 126, a software algorithm block
134 and a NIC block 128. The NIC block 128 may comprise a NIC
processor 130, a processor such as a coalescer 131 and a reduced
NIC memory/buffer block 132. The NIC block 128 may communicate with
an external network via a wired and/or a wireless connection, for
example. The wireless connection may be a wireless local area
network (WLAN) connection as supported by the IEEE 802.11
standards, for example.
[0027] The coalescer 131 may be a dedicated processor or hardware
state machine that may reside in the packet-receiving path. The
host TCP stack may comprise software that enables management of the
TCP protocol processing and may be part of an operating system,
such as Microsoft Windows or Linux. The coalescer 131 may comprise
suitable logic, circuitry and/or code that may enable accumulation
or coalescing of TCP data. In this regard, the coalescer 131 may
utilize a flow lookup table (FLT) to maintain information regarding
current network flows for which TCP segments are being collected
for aggregation. The FLT may be stored in, for example, the reduced
NIC memory/buffer block 132. The coalescer 131 may enable
generation of a single aggregated TCP segment from the accumulated
or collected TCP segments when a termination event occurs. The
single aggregated TCP segment may be communicated to the host
memory/buffer 126, for example.
[0028] In accordance with certain embodiments of the invention,
providing a single aggregated TCP segment to the host for TCP
processing significantly reduces overhead processing by the host
124. Furthermore, since there is no transfer of TCP state
information, dedicated hardware such as a NIC 128 may assist with
the processing of received TCP segments by coalescing or
aggregating multiple received TCP segments so as to reduce
per-packet processing overhead.
[0029] In conventional TCP processing systems, it is necessary to
know certain information about a TCP connection prior to arrival of
a first segment for that TCP connection. In accordance with various
embodiments of the invention, it is not necessary to know about the
TCP connection prior to arrival of the first TCP segment since the
TCP state or context information is still solely managed by the
host TCP stack and there is no transfer of state information
between the hardware stack and the software stack at any given
time.
[0030] In an embodiment of the invention, an offload mechanism may
be provided that is stateless from the host stack perspective,
while stateful from the offload device perspective, achieving
comparable performance gain when compared to TTOE. Transparent TCP
offload (TTO) reduces the host processing power required for TCP by
allowing the host system to process both receive and transmit data
units that are bigger than an MTU. In an exemplary embodiment of the
invention, 64 KB of processing data units (PDUs) may be processed
rather than 1.5 KB PDUs in order to produce a significant reduction
in the packet rate, thus reducing the host processing power for
packet processing.
[0031] In TTO, no handshake may be utilized between the host
operating system and the NIC containing the TTO engine. The TTO
engine may operate autonomously in identifying new flows and in
offloading them. The offload on the transmit side may be similar to LSO,
where the host sends big transmission units and the TTO engine cuts
them to smaller transmitted packets according to maximum segment
size (MSS).
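As a rough illustration of this transmit-side behavior, the C sketch below cuts one large transmit unit into MSS-sized segments; emit_segment and all parameter names are hypothetical stand-ins for the NIC's real transmit path, not part of the original disclosure.

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical helper: emit one wire packet of `len` payload bytes
     * starting at sequence number `sn`. */
    void emit_segment(const uint8_t *payload, size_t len, uint32_t sn);

    /* Cut one large transmit unit into MSS-sized TCP segments, as the
     * LSO-like transmit offload described above does. */
    void cut_to_mss(const uint8_t *unit, size_t unit_len,
                    uint32_t start_sn, size_t mss)
    {
        size_t off = 0;
        while (off < unit_len) {
            size_t seg = unit_len - off;
            if (seg > mss)
                seg = mss;     /* every segment but the last is MSS-sized */
            emit_segment(unit + off, seg, start_sn + (uint32_t)off);
            off += seg;
        }
    }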
[0032] Transparent TCP offload on the receive side may be performed
by aggregating a plurality of received packets of the same flow and
delivering them to the host as if they were received in one
packet--one bigger packet in the case of received data packets, and
one aggregate ACK packet in the case of received ACK packets. The
processing in the host may be similar to the processing of a big
packet that was received. In the case of TCP flow aggregation,
rules may be defined to determine whether to aggregate packets. The
aggregation rules may be established to allow as much aggregation
as possible, without increasing the round trip time such that the
decision whether to aggregate depends on the data that is received
and the importance of delivering it to the host without delay. The
aggregation may be implemented with transmit-receive coupling,
wherein the transmitter and receiver are coupled, by utilizing
transmission information for offload decisions, and the flow may be
treated as a bidirectional flow. The context information of the
receive offload in TTO may be maintained per flow. In this regard,
for every received packet, the incoming packet header may be
utilized to detect the flow it belongs to and this packet updates
the context of the flow.
[0033] When the transmitter and receiver are coupled, the
transmitted network packets may be searched along with the received
network packets to determine the particular network flow to which
the packet belongs. The transmitted network packet may enable
updating of the context of the flow, which may be utilized for
receive offload.
[0034] FIG. 1D is a block diagram of a system for handling
transparent TCP offload, in accordance with an embodiment of the
invention. Referring to FIG. 1D, there is shown an incoming packet
frame 141, a frame parser 143, an association block 149, a context
fetch block 151, a plurality of on-chip cache blocks 147, a
plurality of off-chip storage blocks 160, a plurality of on-chip
storage blocks 162, a RX processing block 150, a frame buffer 154,
a DMA engine 163, a TCP code block 157, a host bus 165, and a
plurality of host buffers 167. The RX processing block 150 may
comprise a coalescer 152.
[0035] The frame parser 143 may comprise suitable logic, circuitry
and/or code that may enable L2 Ethernet processing including, for
example, address filtering, frame validity and error detection of
the incoming frames 141. Unlike an ordinary Ethernet controller,
the next stage of processing may comprise, for example, L3 such as
IP processing and L4 such as TCP processing within the frame parser
143. The TTEEC 114 may reduce the host CPU 102 utilization and
memory bandwidth, for example, by processing traffic on coalesced
TCP/IP flows. The TTEEC 114 may detect, for example, the protocol
to which incoming packets belong based on the packet parsing
information and tuple 145. If the protocol is TCP, then the TTEEC
114 may detect whether the packet corresponds to an offloaded TCP
flow, for example, a flow for which at least some TCP state
information may be kept by the TTEEC 114. If the packet corresponds
to an offloaded connection, then the TTEEC 114 may direct data
movement of the data payload portion of the frame. The destination
of the payload data may be determined from the flow state
information in combination with direction information within the
frame. The destination may be a host memory 106, for example.
Finally, the TTEEC 114 may update its internal TCP and higher
levels of flow state, without any coordination with the state of
the connection on the host TCP stack, and may obtain the host
buffer address and length from its internal flow state.
[0036] The receive system architecture may comprise, for example, a
control path processing 140 and data movement engine 142. The
system components above the control path, as illustrated in the upper
portion of FIG. 1D, may be designed to deal with the various
processing stages used to complete, for example, the L3/L4 or
higher processing with maximal flexibility and efficiency and
targeting wire speed. The result of the stages of processing may
comprise, for example, one or more packet identification cards that
may provide a control structure that may carry information
associated with the frame payload data. This may have been
generated inside the TTEEC 114 while processing the packet in the
various blocks. A data path 142 may move the payload data portions or raw packets 155 of a frame from, for example, an on-chip packet frame buffer 154, upon control processing completion, to a direct memory access (DMA) engine 163 and subsequently to the host buffer 167 that was chosen via processing, via the host bus 165. The data path 142 to the DMA engine may comprise packet data and optional headers 161.
[0037] The receiving system may perform, for example, one or more
of the following: parsing the TCP/IP headers 145; associating the
frame with a TCP/IP flow in the association block 149; fetching the
TCP flow context in the context fetch block 151; processing the
TCP/IP headers in the RX processing block 150; determining
header/data boundaries and updating state 153; mapping the data to
host buffers; and transferring the data via a DMA engine 163 into
these host buffers 167. The headers may be consumed on chip or
transferred to the host buffers 167 via the DMA engine 163.
[0038] The packet frame buffer 154 may be an optional block in the
receive system architecture. It may be utilized for the same
purpose as, for example, a first-in-first-out (FIFO) data structure
is used in a conventional L2 NIC or for storing higher layer
traffic for additional processing. The packet frame buffer 154 in
the receive system may not be limited to a single instance. As
control path 140 processing is performed, the data path 142 may
store the data between data processing stages one or more
times.
[0039] In an exemplary embodiment of the invention, at least a
portion of the coalescing operations described for the coalescer
111 in FIG. 1B and/or for the coalescer 131 in FIG. 1C may be
implemented in a coalescer 152 in the RX processing block 150 in
FIG. 1D. In this instance, buffering or storage of TCP data may be
performed by, for example, the frame buffer 154. Moreover, the FLT
utilized by the coalescer 152 may be implemented using the off-chip
storage 160 and/or the on-chip storage 162, for example.
[0040] In an embodiment of the invention, a new flow may be
detected at some point during the flow lifetime. The flow state is
unknown when the new flow is detected and the first packets are
utilized to update the flow state until the flow is known to be
in-order. A device performing TTO may also support other offload
types, for example, TOE, RDMA, or iSCSI offload. In this case, the
FLT for TTO may be shared with the connection search for other
offload types with each entry in the FLT indicating the offload
type for that flow. Packets that belong to flows of other offload
types may not be candidates for TTO. Upon detecting a new flow, the
flow may be initiated with the basic initialization context. An
entry in the FLT with a flow ID may be created.
[0041] In another embodiment of the invention, a plurality of segments of the same flow may be aggregated in TTO up to a receive aggregation length (RAL), presenting to the host a bigger segment for processing. If aggregation is allowed, the received packet may be placed in the host memory 126 but may not be delivered to the host immediately. Instead, the context of the flow this packet belongs to may be updated. A new incoming packet may either be delivered immediately on its own, if there were no prior aggregated packets that were not delivered, or be delivered as a single packet that represents both that packet and the previously received packets. In another embodiment of the invention, the packet may not be delivered but may update the flow's context.
[0042] A termination event may occur and the packet may not be
aggregated if at least one of the following occurs at the TCP
level: (1) the data is not in-order as derived from the received
sequence number (SN) and the flow's context; (2) at least one
packet with TCP flags other than ACK flag, for example, a PUSH flag
is detected; (3) at least one packet with selective acknowledgement
(SACK) information is detected; or (4) the received ACK SN is
bigger than the delivered ACK SN and requires stopping the
aggregation. Similarly, a termination event may occur and the
packet may not be aggregated if at least one of the following
occurs at the IP level: (1) the type of service (TOS) field in the
IP header is different than the TOS field of the previous packets
that were aggregated; or (2) the received packet is an IP
fragment.
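The termination conditions of paragraph [0042] reduce to a handful of per-packet checks. The C sketch below is one illustrative way to write them; the pkt_info descriptor and its field names are assumptions, and rule (4) is shown in its simplest form, with the thresholds of paragraph [0047] refining it.

    #include <stdbool.h>
    #include <stdint.h>

    /* Illustrative per-packet summary; names are assumptions. */
    struct pkt_info {
        uint32_t seq;            /* received TCP sequence number */
        uint32_t ack_sn;         /* ACK SN carried by the packet */
        uint16_t tcp_flags;      /* FIN/SYN/RST/PSH/ACK/... bits */
        bool     has_sack;       /* SACK option present          */
        bool     is_ip_fragment;
        uint8_t  tos;
    };

    #define TCP_FLAG_ACK 0x10u

    /* True if this packet must end aggregation (paragraph [0042]). */
    bool is_termination_event(const struct pkt_info *p,
                              uint32_t expected_seq,    /* flow context */
                              uint32_t delivered_ack_sn,
                              uint8_t aggregated_tos)
    {
        /* TCP level: */
        if (p->seq != expected_seq)
            return true;                 /* (1) data not in order        */
        if (p->tcp_flags & ~TCP_FLAG_ACK)
            return true;                 /* (2) flags other than ACK,
                                            for example a PUSH flag      */
        if (p->has_sack)
            return true;                 /* (3) SACK information         */
        if ((int32_t)(p->ack_sn - delivered_ack_sn) > 0)
            return true;                 /* (4) ACK SN advanced; the
                                            [0047] thresholds refine this */
        /* IP level: */
        if (p->tos != aggregated_tos)
            return true;                 /* (1) TOS differs              */
        if (p->is_ip_fragment)
            return true;                 /* (2) IP fragment              */
        return false;
    }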
[0043] When aggregating a plurality of packets to a single packet,
the aggregated packet's header may contain the aggregated header of
all the individual packets it contains. In an exemplary embodiment
of the invention, a plurality of TCP rules for the aggregation may
be as follows. For example, (1) the SN in the aggregated header is
the SN of the first or oldest packet; (2) the ACK SN in the
aggregated header is the ACK SN of the last or youngest segment;
(3) the length of the aggregated
header is the sum of the lengths of all the aggregated packets; (4)
the window in the aggregated header is the window received in the
last or youngest aggregated packet; (5) the time stamp (TS) in the
aggregated header is the TS received in the first or oldest
aggregated packet; (6) the TS-echo in the aggregated header is the
TS-echo received in the first or oldest aggregated packet; and (7)
the checksum in the aggregated header is the accumulated checksum
of all aggregated packets.
[0044] In an exemplary embodiment of the invention, a plurality of
IP field aggregation rules may be provided. For example, (1) the
TOS of the aggregated header may be that of all the aggregated
packets; (2) the time-to-live (TTL) of the aggregated header is the
minimum of all incoming TTLs; (3) the length of the aggregated
header is the sum of the lengths in the aggregated packets; (4) the
fragment offset of the aggregated header may be zero for aggregated
packets; and (5) the packet ID of the aggregated header is the last
ID received.
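Taken together, the TCP rules of paragraph [0043] and the IP rules of paragraph [0044] fully determine the aggregated header. The following C sketch applies them field by field as each packet joins an aggregate. The structure and field names are illustrative assumptions, and the checksum is shown as a plain running sum, whereas real hardware would fold carries in one's-complement arithmetic.

    #include <stdint.h>

    /* Illustrative parsed-header view of one packet; names are
     * assumptions made for this sketch. */
    struct hdr_fields {
        uint32_t seq, ack_sn;
        uint16_t len, window, ip_id, checksum;
        uint32_t ts, ts_echo;
        uint8_t  tos, ttl;
    };

    /* Fold one more packet into the aggregated header `agg`, per the
     * TCP rules of [0043] and the IP rules of [0044]. */
    void fold_header(struct hdr_fields *agg, const struct hdr_fields *pkt,
                     int is_first)
    {
        if (is_first) {
            agg->seq      = pkt->seq;     /* TCP (1): SN of oldest packet  */
            agg->ts       = pkt->ts;      /* TCP (5): TS of oldest packet  */
            agg->ts_echo  = pkt->ts_echo; /* TCP (6): TS-echo of oldest    */
            agg->tos      = pkt->tos;     /* IP (1): TOS shared by all     */
            agg->ttl      = pkt->ttl;
            agg->len      = 0;
            agg->checksum = 0;
        }
        agg->ack_sn = pkt->ack_sn;        /* TCP (2): ACK SN of youngest   */
        agg->window = pkt->window;        /* TCP (4): window of youngest   */
        agg->len   += pkt->len;           /* TCP (3) / IP (3): summed len  */
        agg->ip_id  = pkt->ip_id;         /* IP (5): last ID received      */
        if (pkt->ttl < agg->ttl)
            agg->ttl = pkt->ttl;          /* IP (2): minimum incoming TTL  */
        agg->checksum += pkt->checksum;   /* TCP (7): accumulated checksum */
        /* IP (4): the fragment offset of the aggregate is zero. */
    }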
[0045] The received packets may be aggregated until the received
packet cannot be aggregated due to the occurrence of a termination
event, or if a timeout has expired on that flow, or if the
aggregated packet exceeds RAL. The timeout may be implemented by
setting a timeout to a value, timeout aggregation value, when the
first packet on a flow is placed without delivery. The following
packets that are aggregated may not change the timeout. When the
packets are delivered due to timeout expiration the timeout may be
canceled and may be set again in the next first packet that is not
delivered. Notwithstanding, other embodiments of the invention may
provide timeout implementation by periodically scanning all the
flows.
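The timer discipline of paragraph [0045], in which only the first undelivered packet arms the timer, later packets leave it untouched, and delivery cancels it, may be sketched as follows. The names and the absolute-time representation are assumptions; as the paragraph notes, an implementation might instead periodically scan all flows.

    #include <stdbool.h>
    #include <stdint.h>

    /* Illustrative per-flow timer state; names are assumptions. */
    struct flow_timer {
        bool     armed;
        uint64_t deadline;   /* absolute time, in the NIC's time unit */
    };

    /* Called when a packet is placed without delivery. */
    void on_packet_held(struct flow_timer *t, uint64_t now,
                        uint64_t timeout_aggregation_value)
    {
        if (!t->armed) {          /* only the FIRST held packet arms it */
            t->armed = true;
            t->deadline = now + timeout_aggregation_value;
        }                         /* later aggregated packets: no change */
    }

    /* Called when the aggregate is delivered, for any reason. */
    void on_delivery(struct flow_timer *t)
    {
        t->armed = false;         /* canceled; re-armed by the next first
                                     packet that is not delivered */
    }

    bool timeout_expired(const struct flow_timer *t, uint64_t now)
    {
        return t->armed && now >= t->deadline;
    }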
[0046] In an exemplary embodiment of the invention, the received
ACK SN may be relevant to determine the rules to aggregate pure
ACKs and to determine the rules to stop aggregation of packets with
data due to the received ACK SN. The duplicated pure ACKs may never
be aggregated. When duplicated pure ACKs are received, they may
cause prior aggregated packets to be delivered and the pure ACK may
be delivered immediately and separately. The received ACK SN may also
be utilized to stop the aggregation and deliver the pending
aggregated packet to the host TCP/IP stack.
[0047] In an exemplary embodiment of the invention, a plurality of
rules may be provided for stopping the aggregation according to the
ACK SN. For example, the aggregation may be stopped when: (1) the number of acknowledged (ACKed) bytes that are not yet delivered, taking into account the received segments and the prior segments that were not delivered, exceeds a threshold, ReceiveAckedBytesAggregation, for example, in bytes; or (2) the time from the arrival of the first packet that advanced the received ACK SN exceeds a threshold, TimeoutAckAggregation, for example. For this purpose, a second timer per flow may be required, or other mechanisms, such as periodically scanning the flows, may be implemented.
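The two stop rules of paragraph [0047] amount to one byte-count comparison and one age comparison. An illustrative C sketch follows, with assumed names for the per-flow bookkeeping.

    #include <stdbool.h>
    #include <stdint.h>

    /* Illustrative flow bookkeeping for ACK aggregation; names are
     * assumptions. */
    struct ack_agg_state {
        uint32_t acked_sn;           /* highest ACK SN received           */
        uint32_t delivered_ack_sn;   /* highest ACK SN delivered to host  */
        uint64_t first_advance_time; /* arrival time of the first packet
                                        that advanced the received ACK SN */
        bool     advance_pending;
    };

    /* Stop aggregating and deliver, per paragraph [0047]. */
    bool should_stop_for_ack(const struct ack_agg_state *s, uint64_t now,
                             uint32_t receive_acked_bytes_aggregation,
                             uint64_t timeout_ack_aggregation)
    {
        /* Rule 1: ACKed-but-undelivered bytes exceed the byte threshold. */
        if (s->acked_sn - s->delivered_ack_sn >
            receive_acked_bytes_aggregation)
            return true;
        /* Rule 2: the first ACK-advancing packet has waited too long. */
        if (s->advance_pending &&
            now - s->first_advance_time > timeout_ack_aggregation)
            return true;
        return false;
    }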
[0048] In another exemplary embodiment of the invention, the flows
may be removed from the host memory if one of the following occurs:
(1) a reset (RST) flag was detected in the receive side; (2) a
finish (FIN) flag was detected in the receive side; (3) there was
no receive activity on the flow for a predefined time
TerminateNoActivityTime, for example; (4) a KeepAlive packet in the
receive direction was not acknowledged. A least recently used (LRU)
cache may be used instead of a timeout rule to remove the flows
from the host memory.
[0049] In another exemplary embodiment of the invention, the flows
may be removed from the host memory if the flow was closed due to a
retransmission timeout that requires information from the
transmitter. In one exemplary embodiment of the invention,
retransmission timeout may comprise periodically scanning all the
flows to determine if any flow is closed. The scanning frequency
may be low, for example, one scan every 5 seconds. In each scan, if there is
unacknowledged data that was transmitted by the NIC 128 the maximum
transmitted sequence number (SN) may be recorded. Additionally, if
there is unacknowledged data that was transmitted by the peer side,
the maximum received SN may be recorded. If, in two consecutive
scans, there is pending data on the same flow of the same type with
the recorded number unchanged, this indicates data that was not
acknowledged for an entire scan period. In this case the flow
may be removed.
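The two-consecutive-scans rule of paragraph [0049] may be expressed compactly: if the same type of pending data is observed in two successive scans with the recorded maximum SN unchanged, the flow is presumed stuck and removed. The sketch below uses assumed names.

    #include <stdbool.h>
    #include <stdint.h>

    /* Illustrative per-flow scan record; names are assumptions. */
    struct scan_record {
        bool     had_pending_tx;  /* unACKed data transmitted by the NIC  */
        uint32_t max_tx_sn;       /* recorded maximum transmitted SN      */
        bool     had_pending_rx;  /* unACKed data transmitted by the peer */
        uint32_t max_rx_sn;       /* recorded maximum received SN         */
    };

    /* Run once per scan period (for example, every 5 seconds). Returns
     * true if the flow should be removed: pending data of the same type,
     * with the recorded SN unchanged, in two consecutive scans. */
    bool scan_flow(struct scan_record *prev, const struct scan_record *cur)
    {
        bool stuck =
            (cur->had_pending_tx && prev->had_pending_tx &&
             cur->max_tx_sn == prev->max_tx_sn) ||
            (cur->had_pending_rx && prev->had_pending_rx &&
             cur->max_rx_sn == prev->max_rx_sn);
        *prev = *cur;   /* current scan becomes the next scan's reference */
        return stuck;
    }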
[0050] FIG. 2A is a diagram illustrating exemplary steps that may
be utilized for handling out-of-order data when a packet P3 and a
packet P4 arrive out-of-order with respect to the order of
transmission, in accordance with an embodiment of the invention.
Referring to FIG. 2A, the packets P3 and P4 may arrive in-order with
respect to each other at the NIC 128 but before the arrival of a
packet P2, as shown in the actual receive RX traffic pattern 200.
The packets P3 and P4 may correspond to a fourth packet and a fifth
packet within an isle 211, respectively, in a TCP transmission
sequence. In this case, there is a gap or time interval between the
end of the packet P1 and the beginning of the packet P3 in the
actual receive RX traffic pattern 200. A first disjoint portion in
the TCP transmission sequence may result from the arrival of the
packets P3 and P4 as shown in the TCP receive sequence space 202
after the isle 213 comprising packets P0 and P1. The rightmost
portion of isle 211 rcv_nxt_R may be represented as
(rcv_nxt_L+(length of isle)), where rcv_nxt_L is the leftmost
portion of isle 211 and the length of isle is the sum of the
lengths of packets P3 and P4.
[0051] FIG. 2B is a state diagram illustrating exemplary
transparent TCP offload with transmit-receive coupling, in
accordance with an embodiment of the invention. Referring to FIG.
2B, there is shown a plurality of exemplary flow states, namely, an
in order state 226, an Out-Of-Order (OOO) state 224, and an unknown
state 222. In transition state 228, the unknown state 222 may be detected
for a flow for which a 3-way TCP handshake has not been detected, or
for a flow that is detected at some point in its life other than the
initialization phase. The offload engine may also track outgoing and
incoming TCP segments with a set synchronize (SYN) flag to detect a new flow.
The exemplary transition states may be implemented as a state
machine.
[0052] The TCP 3-way handshake begins with a synchronize (SYN)
segment containing an initial send sequence number (ISN) being
chosen by, and sent from a first host. This sequence number may be
the starting sequence number of the data in that packet and may
increment for every byte of data sent within the segment. When the
second host receives the SYN with the sequence number, it may
transmit a SYN segment with its own totally independent ISN number
in the sequence number field along with an acknowledgment field.
This acknowledgment (ACK) field may inform the recipient that its
data was received at the other end and it expects the next segment
of data bytes to be sent, and may be referred to as the SYN-ACK.
When the first host receives this SYN-ACK segment, it may send an
ACK segment containing the next sequence number, called a forward
acknowledgement, which is received by the second host. The ACK segment
may be identified by the ACK field being set. Segments that are not
acknowledged within a certain period of time may be
retransmitted.
[0053] When a flow is transparent TCP offloaded, the flow may not
move from the in order state 226 or the OOO state 224 to the unknown
state 222 unless it gets removed and detected again. In transition
state 230, the state diagram may track the out-of-order isle
sequence number boundaries using, for example, the parameters
rcv_nxt_R and rcv_nxt_L as illustrated in FIG. 2A. The first
ingress segment may be referred to as an isle, for example, isle
213 (FIG. 2A), and the ordering state may be set to the OOO state 224.
The rightmost portion of isle rcv_nxt_R may be represented as
(rcv_nxt_L+(length of isle)), where rcv_nxt_L is the leftmost
portion of isle and the length of isle is the sum of the lengths of
packets in the isle.
[0054] In transition state 232, the NIC 128 may access the local
stack acknowledgment information as the transmitter and receiver
are coupled. The ordering state may be modified from OOO state 224
to in-order state 226 whenever an egress ACK sequence number is
greater than an isle length of at least one TCP segment.
[0055] In transition state 234, the initial ordering state may be
set to the in order state 226, if the new flow is detected with the
TCP 3-way handshake. In transition state 236, the rcv_nxt_R may be
utilized to check the condition of ingress packets according to the
following algorithm:

    if (in_packet_sn == rcv_nxt_R)   // when the isle is increased, update rcv_nxt_R
        rcv_nxt_R = in_packet_sn + in_packet_len
[0056] In transition state 238, the ordering state may be modified from the in order state 226 to the OOO state 224 if the sequence number of the ingress packet is not equal to rcv_nxt_R. The value of rcv_nxt_R may be used to check the condition of ingress packets according to the following algorithm:

    if (in_packet_sn != rcv_nxt_R)
        rcv_nxt_L = in_packet_sn
        rcv_nxt_R = in_packet_sn + in_packet_len
        change state to OOO 224
[0057] In transition state 240, during the OOO state 224, the boundaries of the highest OOO isle may be tracked for every ingress packet using the following exemplary algorithm:

    if (in_packet_sn == rcv_nxt_R)        // when the isle is increased, update rcv_nxt_R
        rcv_nxt_R = in_packet_sn + in_packet_len
    else if (in_packet_sn > rcv_nxt_R)    // when a new higher isle is generated
        rcv_nxt_R = in_packet_sn + in_packet_len
        rcv_nxt_L = in_packet_sn
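Transitions 236, 238, and 240 may be collected into a single per-packet update of the ordering state. The C sketch below restates the three fragments above as one function; the enum and structure names are assumptions, and the wraparound-safe signed comparison is an assumption in place of the plain > used in the pseudocode above.

    #include <stdint.h>

    enum flow_order { ORDER_UNKNOWN, ORDER_IN_ORDER, ORDER_OOO };

    /* Illustrative ordering context; rcv_nxt_l/rcv_nxt_r follow the
     * patent's rcv_nxt_L/rcv_nxt_R. */
    struct order_ctx {
        enum flow_order state;
        uint32_t rcv_nxt_l;   /* leftmost SN of the highest isle  */
        uint32_t rcv_nxt_r;   /* rightmost SN of the highest isle */
    };

    /* Per ingress packet: the logic of transitions 236, 238 and 240. */
    void update_order(struct order_ctx *c, uint32_t in_packet_sn,
                      uint32_t in_packet_len)
    {
        if (c->state == ORDER_IN_ORDER) {
            if (in_packet_sn == c->rcv_nxt_r) {
                /* Transition 236: the isle is extended in order. */
                c->rcv_nxt_r = in_packet_sn + in_packet_len;
            } else {
                /* Transition 238: a gap; the flow becomes out of order. */
                c->rcv_nxt_l = in_packet_sn;
                c->rcv_nxt_r = in_packet_sn + in_packet_len;
                c->state = ORDER_OOO;
            }
        } else if (c->state == ORDER_OOO) {
            /* Transition 240: track the highest isle's boundaries. */
            if (in_packet_sn == c->rcv_nxt_r) {
                c->rcv_nxt_r = in_packet_sn + in_packet_len; /* isle grows */
            } else if ((int32_t)(in_packet_sn - c->rcv_nxt_r) > 0) {
                /* A new, higher isle is generated. */
                c->rcv_nxt_l = in_packet_sn;
                c->rcv_nxt_r = in_packet_sn + in_packet_len;
            }
        }
        /* ORDER_UNKNOWN: the first packets seed the context until the
           flow is known to be in order (transition 230). */
    }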
[0058] In another embodiment of the invention, the aggregation may be stopped when the number of ACKed bytes that have not yet been delivered exceeds a fraction of the pending transmitted bytes that were not ACKed. The pending transmitted bytes count may be calculated as the difference between sndMax, the most advanced sequence number (SN) that was transmitted, and the last received ACK SN that was delivered.
[0059] In another embodiment of the invention, the aggregation may be stopped when the number of ACKed bytes exceeds a dynamic threshold. This threshold may depend on the size of the packets that were transmitted to the peer. The sizes of the transmitted packets, or the sizes of the transmission units that were sent to the chip to be transmitted and were not yet ACKed, may be recorded. In the case of LSO, the aggregation may be set to ACK blocks of data similar to the transmitted data units.
[0060] FIG. 3 is a flow chart illustrating exemplary steps for
transparent TCP offload with per flow estimation of far end
transmit window, in accordance with an embodiment of the invention.
Referring to FIG. 3, in step 302, each of the received TCP segments
and the transmitted TCP segments may be monitored to determine
which network flow they belong to based on their respective header
information. In step 303, it may be determined whether the received
TCP segments are for a particular network flow. If the received TCP
segments are not for a particular network flow, control passes to
step 302. If the received TCP segments are for a particular network
flow, control passes to step 304. In step 304, a network interface
card (NIC) processor 130 may enable collection of at least one
received TCP segment for a determined network flow. In step 306,
the NIC 128 enables storage of state information for the received
TCP segment and state information for transmitted TCP segments for
the determined network flow without transferring state information
for the received TCP segment and the state information for the
transmitted TCP segments to a host system 124 communicatively
coupled to the NIC 128. In step 308, the NIC 128 may determine the
time period over which the received TCP segments are aggregated
before transmitting them to the host system 124. The time period
for aggregation may be a minimum of a time period for a termination
event to occur and a time period for the far-end effective transmit
window. In step 310, the far-end effective transmit window may be
calculated as the maximum value of (rcv_nxt_R - ack_sn) observed since
the flow started in an in-order state, where rcv_nxt_R may
represent the sequence number of the next expected TCP segment and
ack_sn may represent a sequence number of the next received
acknowledgement packet from the host system.
[0061] In step 312, a new TCP segment may be generated by
aggregating at least a portion of a plurality of the collected TCP
segments for the determined network flow over the determined period
of time. In step 314, the NIC 128 enables communication of the
generated new TCP segment, new state information for the new TCP
segment, and the state information for the transmitted TCP segments
to the host system 124 for TCP offload.
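As an illustrative sketch of steps 308-310, the far-end effective transmit window may be maintained as a running maximum of (rcv_nxt_R - ack_sn), and the aggregation period taken as the minimum of the termination timeout and the window-derived period. All names below are assumptions made for this sketch.

    #include <stdint.h>

    /* Illustrative per-flow window estimator; names are assumptions. */
    struct tx_window_est {
        uint32_t max_window;   /* max of (rcv_nxt_R - ack_sn) observed
                                  since the flow started in an in-order
                                  state */
    };

    /* Step 310: update the far-end effective transmit window estimate. */
    void update_window(struct tx_window_est *w,
                       uint32_t rcv_nxt_r,  /* SN of next expected segment */
                       uint32_t ack_sn)     /* SN of next received ACK     */
    {
        uint32_t win = rcv_nxt_r - ack_sn;  /* modulo-2^32 TCP arithmetic */
        if (win > w->max_window)
            w->max_window = win;
    }

    /* Step 308: the aggregation period is the minimum of the time for a
     * termination event to occur and the window-derived period. */
    uint64_t aggregation_period(uint64_t termination_timeout,
                                uint64_t window_period)
    {
        return termination_timeout < window_period ? termination_timeout
                                                   : window_period;
    }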
[0062] Another embodiment of the invention may provide a
machine-readable storage, having stored thereon, a computer program
having at least one code section executable by a machine, thereby
causing the machine to perform the steps as described above for
transparent TCP offload with per flow estimation of a far end
transmit window.
[0063] Accordingly, the present invention may be realized in
hardware, software, or a combination of hardware and software. The
present invention may be realized in a centralized fashion in at
least one computer system, or in a distributed fashion where
different elements are spread across several interconnected
computer systems. Any kind of computer system or other apparatus
adapted for carrying out the methods described herein is suited. A
typical combination of hardware and software may be a
general-purpose computer system with a computer program that, when
being loaded and executed, controls the computer system such that
it carries out the methods described herein.
[0064] The present invention may also be embedded in a computer
program product, which comprises all the features enabling the
implementation of the methods described herein, and which when
loaded in a computer system is able to carry out these methods.
Computer program in the present context means any expression, in
any language, code or notation, of a set of instructions intended
to cause a system having an information processing capability to
perform a particular function either directly or after either or
both of the following: a) conversion to another language, code or
notation; b) reproduction in a different material form.
[0065] While the present invention has been described with
reference to certain embodiments, it will be understood by those
skilled in the art that various changes may be made and equivalents
may be substituted without departing from the scope of the present
invention. In addition, many modifications may be made to adapt a
particular situation or material to the teachings of the present
invention without departing from its scope. Therefore, it is
intended that the present invention not be limited to the
particular embodiment disclosed, but that the present invention
will include all embodiments falling within the scope of the
appended claims.
* * * * *