U.S. patent application number 11/857850 was filed with the patent office on 2009-03-19 for phy bandwidth estimation from backpressure patterns.
This patent application is currently assigned to AGERE SYSTEMS INC. The invention is credited to Codrut Radu Radulescu.
Application Number: 20090073889 (Appl. No. 11/857850)
Family ID: 40454338
Filed Date: 2009-03-19
United States Patent Application 20090073889
Kind Code: A1
Radulescu; Codrut Radu
March 19, 2009
PHY BANDWIDTH ESTIMATION FROM BACKPRESSURE PATTERNS
Abstract
The present invention provides a system and method of
determining available bandwidth at a physical layer (PHY) device at
a server on a broadband network. A link layer controller of a
master administrator adaptively polls a PHY device over a set of
time intervals. During polling, the controller places a PHY
device's address on a line of a bus and awaits a response from the
PHY device. Based upon the response from the PHY device, the
administrator can determine whether the PHY device has available
bandwidth. The link layer controller uses this information to
recalculate its polling scheme to better make use of the available
bandwidth over the shared transmission medium to which each PHY
device in the network is attached.
Inventors: Radulescu; Codrut Radu (Bridgewater, NJ)
Correspondence Address: FOX ROTHSCHILD LLP, 2000 MARKET STREET, 10th Floor, PHILADELPHIA, PA 19103, US
Assignee: AGERE SYSTEMS INC., Allentown, PA
Family ID: 40454338
Appl. No.: 11/857850
Filed: September 19, 2007
Current U.S. Class: 370/252; 370/449
Current CPC Class: H04L 12/403 20130101; H04L 43/0805 20130101
Class at Publication: 370/252; 370/449
International Class: G06F 11/00 20060101 G06F011/00; H04L 12/403 20060101 H04L012/403
Claims
1. A method for determining available bandwidth at a physical layer
(PHY) device at a network administration server, the method
comprising the steps of: polling by a controller said PHY device at
predetermined time intervals; receiving at said administrator a
series of responses from said PHY device; and determining available
bandwidth at said PHY device based upon said responses.
2. The method of claim 1, wherein said polling comprises
transmitting an inquiry as to the status of an incoming packet
buffer of said PHY device.
3. The method of claim 2, wherein said responses from said PHY
device are indicative of the current status of said incoming packet
buffer.
4. The method of claim 1 further comprising the steps of: comparing
said responses to determine an optimized time interval; and polling
said PHY device at said optimized time interval.
5. The method of claim 4, wherein said optimized time interval is
determined based upon the current performance level of said PHY
device.
6. The method of claim 5, wherein said performance level of said
PHY device is determined from the available bandwidth at said PHY
device.
7. The method of claim 6, wherein said PHY device is attached to a
common bus shared with a plurality of other PHY devices.
8. The method of claim 7, wherein said PHY device and said
plurality of other PHY devices are ports for connecting nodes of a
broadband network.
9. A system for determining available bandwidth at a physical layer
(PHY) device at a network administration server comprising: a
controller for polling said PHY device at predetermined time
intervals; a PHY device operably connected to said administrator
for responding to said polling with a series of responses; and a
response measurement unit for receiving said responses from said
PHY device and determining available bandwidth at said PHY device
based upon said responses.
10. The system of claim 9, wherein said polling comprises
transmitting an inquiry as to the status of an incoming packet
buffer of said PHY device.
11. The system of claim 10, wherein said responses from said PHY
device are indicative of the current status of said incoming packet
buffer.
12. The system of claim 9 wherein said response measurement unit is
further programmed to compare said responses to determine an
optimized time interval and instruct said administrator to poll
said PHY device at said optimized time interval.
13. The system of claim 12, wherein said optimized time interval is
determined based upon the current performance level of said PHY
device.
14. The system of claim 13, wherein said performance level of said
PHY device is determined from the available bandwidth at said PHY
device.
15. The system of claim 14, wherein said PHY device is attached to
a common bus shared with a plurality of other PHY devices.
16. The system of claim 15, wherein said PHY device and said
plurality of other PHY devices are ports for connecting nodes of a
broadband network.
Description
FIELD OF THE INVENTION
[0001] The invention relates to physical (PHY) device monitoring,
and particularly to monitoring bandwidth in PHY devices.
BACKGROUND OF THE INVENTION
[0002] As computer network related technology has become faster and
cheaper, higher performance computer networks have spread rapidly.
One common example of a high performance computer network is a
broadband network distributed amongst a grouping of consumers used
for Internet access. One typical type of a broadband network is a
cable-based Internet provider, such as those provided by television
cable companies. Cable networks provide affordable, high speed
Internet to anyone hardwired to a cable television network. Another
typical broadband network is a Digital Subscriber Line (DSL)
network, such as those provided by telephone providers. Similar to
a cable network, a DSL network utilizes existing phone lines to
offer a high speed alternative to dial-up Internet access. A third,
newer type of broadband network is a wireless broadband network
such as those provided by wireless telephone companies. A user
accesses these networks by integrating a wireless broadband network
card into their computer for receiving broadband network
signals.
[0003] While broadband networks provide many benefits to users,
such as the aforementioned high speed and low cost, several
drawbacks are common. One such drawback is that on a typical
broadband network, all consumer or end user devices are connected
to a master administrator via a physical layer (PHY) device at the
master administrator. For example, on a typical cable network, each
customer is connected to a master administrator by a PHY device.
Essentially, each PHY device functions as a port used for accessing
the network by a client device. Each PHY device is connected to
control circuitry of the master administrator via a shared bus. Any
information sent to an individual PHY device is sent along this
shared bus. If a target PHY device has low or no bandwidth
available when a message is sent, then the message cannot be
received and must be resent, effectively wasting bus time, as no
other PHY device can communicate while the bus is sending a message
to another PHY device.
[0004] One solution to monitoring the bandwidth at each PHY device
is a technique involving constant monitoring of the available
bandwidth at each PHY device such that no transmissions are sent to
a PHY device that is currently unable to receive data. This is done
by a master administrator that constantly polls (or sends signals
to a device and monitors the device's response) all PHY devices.
However, this approach is resource- and time-consuming at the master
administrator, which is generally required to constantly poll the PHY
devices. This technique also wastes bandwidth
on the bus as polling each device requires additional time
utilizing the bus. While overall this technique achieves the
desired goal of monitoring the bandwidth of each available device,
the technique necessitates an inefficient use of resources
available in the master administrator.
[0005] What is needed is a technique that utilizes dynamic polling
monitored and refined over a period of time such that the typical
available bandwidth of a PHY device can be monitored and utilized
to create a schedule. This schedule can be used for transmitting
data to a PHY device at times when the device is highly likely to
be able to receive a transmission.
SUMMARY OF THE INVENTION
[0006] The present invention provides a system and method of
determining available bandwidth at a physical layer (PHY) device on
a broadband network. A link layer controller of a master
administrator adaptively polls a PHY device over a set of time
intervals. During polling, the controller places a PHY device's
address on a line of a bus and awaits a response from the PHY
device. Based upon the response from the PHY device, the
administrator can determine whether the PHY device has available
bandwidth. The link layer controller uses this information to
recalculate its polling scheme to better make use of the available
bandwidth over the shared transmission medium to which each PHY
device in the network is attached.
[0007] In one embodiment of the present invention, a link layer
controller of a master administrator polls a first PHY device.
During polling, the link layer controller places the address of a
first PHY device on a line of the bus, and the PHY device responds
with an indication of whether its incoming packet buffer is full.
Upon receiving a positive notification (i.e., the device is ready to
transfer data), the network administration server ceases further
polling of the device and initiates a data transfer. After a period
of time, the network administration server will begin polling the
first PHY device again. As before, the address of the PHY device is sent to the
device and an indication is received indicating the current state
of the PHY device's incoming packet buffer. The link layer
controller repeats the polling of the device until the PHY device
responds with an indication that the incoming packet buffer of the
PHY device is ready to accept a new data packet. After several
repetitions of these steps, the most efficient polling schedule for
that device can be determined, one which maximizes the use of the
available bandwidth at the PHY device without overfilling the PHY
internal buffer, or allowing the incoming packet buffer to sit
empty. Such a schedule consists of a single poll indicating the
buffer-ready status, immediately preceded by a poll indicating the
unavailability of the buffer. In this way, the precise moment in time
at which the buffer crosses the ready threshold can be determined,
and the PHY bandwidth can be extracted from it.
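The bracketing described above (a buffer-unavailable poll immediately followed by a buffer-ready poll) can be sketched as follows. This is an illustrative reconstruction, not code from the application; the function and variable names are hypothetical:

```python
def bracket_ready_transition(samples):
    """Find the pair of consecutive polls that brackets the moment the
    incoming packet buffer crossed the ready threshold.

    samples: list of (time, clav) tuples in chronological order, where
    clav == 0 means buffer full and clav == 1 means ready to accept data.
    Returns the (t_unavailable, t_ready) pair, or None if no such
    transition was observed.
    """
    for (t0, c0), (t1, c1) in zip(samples, samples[1:]):
        if c0 == 0 and c1 == 1:
            return (t0, t1)
    return None
```

The narrower the gap between the two bracketing polls, the more precisely the threshold-crossing time, and hence the PHY bandwidth, can be estimated.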
[0008] By extending this polling scheme to each PHY device on the
bus, the link layer controller can accurately determine the
available bandwidth at each PHY device and create a schedule for
transmitting data to each PHY device that most efficiently utilizes
the bandwidth of the shared transmission medium over which each PHY
device communicates with the link layer controller.
BRIEF DESCRIPTION OF THE FIGURES
[0009] FIG. 1 is a diagram of a master administrator.
[0010] FIG. 2 is a diagram of a higher level architectural view of
a master administrator.
[0011] FIG. 3 is a flowchart illustrating the process followed in
one embodiment of the present invention.
[0012] FIG. 4 is a timing diagram showing the operation of the
present invention according to the embodiment illustrated in FIG. 2
and the process described in FIG. 3.
DETAILED DESCRIPTION
[0013] The present invention provides a method and system for
monitoring available bandwidth at physical layer (PHY) devices in a
master administrator for a broadband network. By adaptively polling
the devices at specific time intervals, a more efficient monitoring
procedure can be created for an individual PHY device that better
utilizes available resources than previous polling procedures.
Adaptively polling refers to a polling schedule that can be
dynamically altered. If, for example, a device is found to be
sitting idle for long periods of time, its polling schedule will be
altered to eliminate these periods of idleness.
[0014] FIG. 1 illustrates a diagram of a typical master
administrator 100. Link layer controller 102 communicates with a
series of PHY devices. These PHY devices are used by broadband
service clients to access the Internet, send and receive email,
utilize voice over IP telephone service, etc. PHY devices 108-1,
108-2, through 108-n are all operably connected to link layer
controller 102 through bus 106. PHY devices are the actual physical layer
connections used by a client to access the available network
resources. Each PHY device has an incoming packet buffer which is
used to store incoming packets from a client computer until the
packets can be processed. By monitoring the incoming packet buffer
full level of a PHY device, the master administrator can accurately
predict what the available bandwidth at each PHY device will be.
However, to accurately predict the available bandwidth, the link
layer controller must poll the PHY devices over bus 106, and shared
busses, such as bus 106, have an inherent scheduling constraint: only
one device can be polled at a time. By carefully scheduling the
polling of each PHY device, the efficiency of the shared bus, in this
case bus 106, can be increased.
[0015] FIG. 2 shows a more detailed view of master administrator
100, including the link layer controller 102. In link layer
controller 102, a group of latency and window width registers (WWR)
204-1, 204-2 through 204-n (corresponding to PHY devices 108-1
through 108-n respectively) store a Maximum Count value (MC) and
Window Width Count (WWC) indicating a polling interval for each PHY
device. The MC and WWC are applied in sequence, i.e., the WWC is
triggered immediately upon MC expiration. Polling of a PHY device
includes placing the address of the polled PHY device on the bus
and receiving a response. By monitoring this response, the link
layer controller can continually adjust the polling interval to
determine an optimal polling schedule for each individual PHY
device.
[0016] The latency registers pass the appropriate MC and WWC to the
PHY counters 206-1, 206-2 through 206-n (again, corresponding to
PHY devices 108-1 through 108-n respectively). Each PHY counter
uses the MC and WWC supplied from the latency register to determine
when its individual PHY device is to be polled. Once the MC timer
expires, the control unit determines whether the FFS poll is to be
issued, and it continues timing the WWC for the next polling action
to determine the FCS moment.
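One way to model the MC/WWC sequencing described above is a two-phase countdown per PHY device. This sketch is an assumption about the counter's behavior; the class name, method name, and event strings are hypothetical:

```python
class PhyCounter:
    """Two-phase countdown: the MC phase runs first; when it expires,
    the WWC phase is triggered immediately, matching the described
    sequencing of the latency and window width register values."""

    def __init__(self, mc, wwc):
        self.mc, self.wwc = mc, wwc
        self.count = mc
        self.phase = "MC"

    def tick(self):
        """Advance one polling cycle; return an event name on expiration."""
        self.count -= 1
        if self.count > 0:
            return None
        if self.phase == "MC":
            self.phase, self.count = "WWC", self.wwc
            return "MC_EXPIRED"   # candidate moment for the FFS poll
        self.phase, self.count = "MC", self.mc
        return "WWC_EXPIRED"      # candidate moment for the FCS poll
```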
[0017] Once a delayed polling request for a new PHY device reaches
the top of the promiscuous polling queue, i.e., each device
scheduled to be polled ahead of the new PHY device has been polled,
polling is initiated by Polling Queue 208. To initiate polling, the
address of the PHY device to be polled is transmitted on Physical
Address line 216a to indicate to a PHY device that it is being
polled. Once the initial signal is sent, the PHY device responds to
the polling over CLAV line 216c. It should be noted that in this
example, bus 106 (from FIG. 1) includes lines Physical Address line
216a, Data line 216b and CLAV line 216c. CLAV is a control signal
used by the system to indicate whether the PHY device is able to
receive packets, or if the device's incoming buffer is full,
rendering the device unable to receive incoming packets. Response
Measurement Status register 212 monitors the CLAV signals from each
PHY device.
[0018] Once the CLAV signals for tFFS and tFCS are received from the
PHY, the Response Measurement Status register 212 records the
corresponding polling times from timer 214 and determines updated MC
and WWC values. Once the PHY-associated registers hold the updated
values, they are loaded into the associated PHY counter upon
expiration. The PHY counter restarts its countdown to zero, starting
from the updated MC, is then reloaded with the WWC, and counts down
to zero again; the cycle repeats. The control system may decide to
place the CLAV test events on the bus or in the promiscuous sampling
queue based on other event dependencies, such as the transmission
completion of another packet since the last poll, or when it is
preempted by another PHY poll. By constantly monitoring
the polling results and updating the MC, WWC values, the network
administration server is better able to schedule data transfers to
and from the PHY devices, since it can accurately monitor the
performance (and subsequent bandwidth) of the PHY devices based
upon their polling schedules.
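The bandwidth estimate itself follows from the two recorded times: between tFFS (buffer observed full) and tFCS (buffer observed ready), the PHY drained a known amount of buffer space. A minimal sketch, assuming the drained byte count is known to the controller (the function name and parameters are hypothetical):

```python
def estimate_phy_bandwidth(t_ffs, t_fcs, drained_bytes):
    """Estimate PHY bandwidth from backpressure timing: the bytes
    drained between the buffer-full observation (t_ffs) and the
    buffer-ready observation (t_fcs), divided by the elapsed time.
    Times are in seconds; the result is in bytes per second."""
    elapsed = t_fcs - t_ffs
    if elapsed <= 0:
        raise ValueError("t_fcs must be later than t_ffs")
    return drained_bytes / elapsed
```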
[0019] FIG. 3 shows a detailed flow chart following one embodiment
of a polling process for an individual PHY device. In this
embodiment, the polling process of PHY device 108-1 is
followed.
[0020] In step 302, the PHY counter loads the MC value from the
latency register. In this example, PHY counter 206-1 will load the
stored MC value from latency register 204-1. If this is the first
time that PHY device 108-1 will be polled, a one will be loaded
from the latency register indicating that one PHY cycle later PHY
device 108-1 will be placed in the polling queue. Once the PHY
counter loads the MC value from the latency register, the process
continues to step 304.
[0021] At step 304, the PHY counter decrements the loaded MC value
by one after each polling cycle. In the present invention, polling
cycles are not specific to any individual PHY device; rather, a
polling cycle occurs any time a PHY device attached to the network
administration server is polled. After decrementing the stored MC
value, the PHY counter checks the updated value at step 306. If the
updated count value is not equal to zero, the process returns to
step 304 where the count value is again decremented. This loop will
continue until the count value at the PHY counter is equal to zero.
In the present example, the MC value for PHY device 108-1 was
initially one, indicating that after one polling cycle the PHY
counter will decrement the MC value to zero.
[0022] When the count value at the PHY counter is equal to zero,
the process continues to step 308. Here, the PHY counter places the
address of the PHY device to be polled into the polling queue. In
this example, PHY counter 206-1 places the network address of PHY
device 108-1 either onto the bus or into polling queue 208 if
pre-empted by another device being polled. If the address is
inserted into the polling queue, the PHY device will be scanned at
the earliest possible cycle.
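Steps 302 through 308 amount to a per-device countdown driven by global polling cycles. The sketch below simulates several devices sharing those cycles and records the cycle at which each device enters the polling queue; it is illustrative only, and the names are hypothetical:

```python
def enqueue_schedule(mc_values, total_cycles):
    """Decrement every device's MC once per global polling cycle
    (steps 304-306); when a counter reaches zero, the device's address
    is placed in the polling queue (step 308).

    mc_values: dict mapping device id to its loaded MC value.
    Returns a list of (cycle, device) enqueue events in order."""
    counters = dict(mc_values)
    events = []
    for cycle in range(1, total_cycles + 1):
        for dev in sorted(counters):
            counters[dev] -= 1
            if counters[dev] == 0:
                events.append((cycle, dev))
        # Enqueued devices stop counting until re-armed with a new MC.
        for dev in [d for d, c in counters.items() if c == 0]:
            del counters[dev]
    return events
```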
[0023] Once the address of PHY device 108-1 reaches the top of the
polling queue, the process proceeds to step 310. Here, a new flow
begins indicating a new polling process. To initiate the polling
process, Polling Queue 208 places the address of PHY device 108-1
on Physical Address line 216a. This indicates to each of the PHY
devices (108-1 through 108-n) that PHY device 108-1 is next to be
polled. Polling Queue 208 also sends a signal to Response
Measurement Status register 212 to begin monitoring CLAV line 216c
for a response from PHY device 108-1.
[0024] Once the Response Measurement Status register 212 has
received an indication from the polling queue that PHY device 108-1
is being polled, the process proceeds to step 312. Here, the
Response Measurement Status register monitors the CLAV line 216c
for a response from PHY device 108-1. Once PHY device 108-1
receives its address on Physical Address line 216a, it responds by
either setting CLAV to a one or to a zero. Once the Response
Measurement Status register 212 receives a signal, the process
splits into one of two possibilities, depending on the response. A
one on CLAV line 216c indicates a positive polling response from
PHY device 108-1. Conversely, a zero on CLAV line 216c indicates a
negative polling response from PHY device 108-1 (i.e., the incoming
packet buffer of the PHY device was full and the device was unable
to accept any additional packets). If the CLAV signal is one, the
process proceeds to step 314.
[0025] Once the process proceeds to step 314, PHY device 108-1 is
removed from the polling queue and an updated MC value is
calculated. The updated MC value is a function of the previous MC
value and the previous CLAV response for PHY device 108-1.
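The application states only that the updated MC is a function of the previous MC value and the previous CLAV response. The rule below is one plausible instantiation, an additive back-off that is entirely an assumption on our part, not the application's formula:

```python
def next_mc(prev_mc, prev_clav):
    """Hypothetical MC update rule: a positive CLAV (1) suggests the
    poll may have come later than necessary, so shorten the interval;
    a negative CLAV (0) means the buffer was still full, so back off.
    MC never drops below one cycle."""
    if prev_clav == 1:
        return max(1, prev_mc - 1)
    return prev_mc + 1
```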
[0026] Once the updated MC value is determined from polling that
bypassed the promiscuous queue (whether the CLAV signal equaled
zero or one), the process proceeds to step 318. At step 318, the
updated MC value is passed to the latency register 204-1. Once
latency register 204-1 has the updated MC value, the process
returns to step 302 and the entire process repeats. By repeating
the process multiple times, a schedule can be determined that
optimizes the polling process to avoid missing times when an
individual device has available incoming buffer space.
[0027] FIG. 4 illustrates a timing diagram of the process described
in FIG. 3. The horizontal dotted line 402 indicates the level at
which the incoming packet buffer fills, resulting in a CLAV signal
of zero from a polled device. Line 404 (the sawtoothed shaped line)
indicates the current level of the incoming packet buffer of a PHY
device being polled. Signal 406 indicates the current level of the
CLAV signal, either one or zero.
[0028] At point 408, PHY device 108-1 is first polled. During this
polling, the incoming packet buffer reaches the point where the
CLAV signal is set to zero. At point 410, PHY device 108-1 is
polled again. Here, the CLAV signal is set to zero, indicating a
failed polling attempt. After the failed attempt, PHY device 108-1
is placed again into the polling queue as discussed above with
respect to FIG. 3. Between points 410 and 412, PHY device 108-1 is
not polled. During this time, the incoming packet buffer continues
to empty as PHY device 108-1 processes the packets stored in the
buffer. During this time, the CLAV signal is reset to one,
indicating that PHY device 108-1 is capable of receiving incoming
packets.
[0029] At point 412, PHY device 108-1 is again polled. After this
polling, the CLAV signal remains set to one resulting in PHY device
108-1 being polled again. This continues through points 414, 416
and 418. During the polling at point 418, the incoming packet
buffer reaches its full point, resulting in the CLAV signal being
set to zero. At point 420, PHY device 108-1 is again polled,
responding with a CLAV signal set to zero indicating a failed
polling attempt. As before, PHY device 108-1 is returned to the
polling queue, and at point 422 the polling process repeats.
[0030] By analyzing the time between when the CLAV signal is set to
zero (labeled Tn and Tn+1 on the diagram), an updated MC value is
calculated. This updated MC value is indicative of the current
performance level of PHY device 108-1.
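The interval between consecutive buffer-full observations (Tn and Tn+1 in the diagram) approximates the device's service period, from which an updated MC can be expressed in polling cycles. A brief sketch; the function name and the rounding policy are assumptions:

```python
def mc_from_full_events(t_n, t_n1, poll_cycle):
    """Convert the period between two buffer-full observations (Tn and
    Tn+1) into a polling-cycle count usable as an updated MC value.
    poll_cycle is the duration of one global polling cycle."""
    period = t_n1 - t_n
    return max(1, round(period / poll_cycle))
```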
[0031] It should be clear to persons familiar with the related arts
that the process, procedures and/or steps of the invention
described herein can be performed by a programmed computing device
running software designed to cause the computing device to perform
the processes, procedures and/or steps described herein. These
processes, procedures and/or steps also could be performed by other
forms of circuitry including, but not limited to,
application-specific integrated circuits, logic circuits, and state
machines.
[0032] Having thus described a particular embodiment of the
invention, various alterations, modifications, and improvements
will readily occur to those skilled in the art. Such alterations,
modifications, and improvements as are made obvious by this
disclosure are intended to be part of this description though not
expressly stated herein, and are intended to be within the spirit
and scope of the invention. Accordingly, the foregoing description
is by way of example only, and not limiting. The invention is
limited only as defined in the following claims and equivalents
thereto.
* * * * *