U.S. patent application number 15/301602, for a data flow control method, was published by the patent office on 2017-02-09 as United States Patent Application 20170041238 (Kind Code A1). This patent application is currently assigned to Orbital Multi Media Holdings Corporation. The applicant listed for this patent is Orbital Multi Media Holdings Corporation. The invention is credited to Shuxun Cao and Manh Hung Peter Do.

Application Number: 15/301602
Publication Number: 20170041238
Family ID: 50776796
Publication Date: February 9, 2017
Inventors: Do; Manh Hung Peter; et al.
DATA FLOW CONTROL METHOD
Abstract
The present invention provides a data flow control method for
transmission of media data from a sending node to a receiving node,
the receiving node capable of playing said media data, over a
communication network, the method comprising identifying a
condition of the communication network between said sending and
receiving nodes, identifying a condition of the receiving node, and
adjusting the media data flow through said communication network
based on the identified condition of the communication network and
the identified condition of the receiving node.
Inventors: Do; Manh Hung Peter (Vancouver, CA); Cao; Shuxun (Shenzhen, CN)
Applicant: Orbital Multi Media Holdings Corporation, Tortola, VG
Assignee: Orbital Multi Media Holdings Corporation, Tortola, VG
Family ID: 50776796
Appl. No.: 15/301602
Filed: April 1, 2015
PCT Filed: April 1, 2015
PCT No.: PCT/GB2015/051028
371 Date: October 3, 2016
Current U.S. Class: 1/1
Current CPC Class: H04L 47/25 20130101; H04L 47/12 20130101; H04L 47/30 20130101; H04L 1/0014 20130101; H04L 43/0882 20130101; H04N 21/8543 20130101; H04N 21/6373 20130101; Y02D 30/50 20200801; H04L 1/0019 20130101; H04N 21/23439 20130101; H04L 1/0002 20130101; H04N 21/26258 20130101; H04L 65/80 20130101; H04L 65/4092 20130101; H04L 65/4069 20130101; H04L 47/2416 20130101; H04N 21/6581 20130101; Y02D 50/10 20180101; H04N 21/238 20130101; H04N 21/8456 20130101; H04L 45/24 20130101; H04N 21/44004 20130101; H04L 47/26 20130101; H04L 47/29 20130101
International Class: H04L 12/825 20060101 H04L012/825; H04L 12/26 20060101 H04L012/26
Foreign Application Data: Apr 3, 2014; GB; 1406048.7
Claims
1. A data flow control method for transmission of media data from a
sending node to a receiving node, the receiving node capable of
playing said media data, over a communication network, the method
comprising identifying a condition of the communication network
between said sending and receiving nodes, identifying a condition
of the receiving node, and adjusting the media data flow through
said communication network based on the identified condition of the
communication network and the identified condition of the receiving
node.
2. The method of claim 1 wherein the sending node is configured for
encoding and streaming said media data to the receiving node based
on a request for such data from the receiving node, and the
receiving node is capable of decoding and playback of said media
data.
3. The method of claim 2 wherein the step of identifying the
condition of the network comprises detecting the level of network
traffic and determining whether the network between the sending
node and the receiving node is in a normal state or in a congested
state, based on the detected level of network traffic; and wherein
the step of identifying the condition of the receiving node
comprises determining whether the data buffer at the receiving node
is at a safe level, unsafe level or critical level, the buffer
being 80% or more full in the safe level, 20%-80% full in the
unsafe level and 0%-20% full in the critical level; wherein said
network and receiving node conditions are periodically monitored
and communicated between the sending node and the receiving nodes
at defined intervals.
4. The method of claim 3 further comprising: responsive to a
request for media data from the receiving node, streaming the
requested media data at an initial data streaming rate; identifying
a maximum data streaming rate supported by the receiving node;
identifying the condition of the network; identifying the condition
of the receiving node; if the network condition is identified as
being normal and the condition of the buffer is at critical or
unsafe level, then continuously increasing the rate of data
streaming until said maximum rate is reached, or until the buffer
reaches the safe level or until the network condition becomes
congested.
5. The method as claimed in claim 4 wherein, if during the step of
continuously increasing the rate of data streaming, the buffer at
the receiving node reaches the safe level, then the method
comprises adjusting the rate of data streaming to a rate that is
equal to a draining rate of the buffer during playback.
6. The method of claim 4 further wherein, if during the step of
continuously increasing the rate of data streaming, the network
condition changes to congested and remains as congested for a first
defined time period, the method comprises: identifying the
remaining playback time for the data left in the data buffer at the
receiving node; reducing the rate of data streaming to near zero or
a calculated low rate of streaming, or completely suspending
streaming of data from the server, until either the network
condition becomes normal or the identified remaining playback
time reduces to 15 seconds or less.
7. The method as claimed in claim 6 wherein, if the remaining
playback time reduces to 15 seconds or less, the method comprises:
requesting the sending node to accept additional network
communication links between the sending node and the receiving
node; determining the total number of additional links required to
sustain real-time playback at the receiving node; establishing the
additional links by the sending node; streaming the media data from
the sending node across all established links evenly, such that if
the condition of the network is identified as congested on a first
link of the plurality of links, the media data is sent via the next
available communication link.
8. The method as claimed in claim 7 wherein the method further
comprises reordering of media data packets arriving at the
receiving node out of sequence by making use of the identifier of
the sequence in the header part of each media data frame.
9. The method as claimed in claim 7 wherein the method further
comprises: identifying one or more additional sending nodes that
are capable of streaming the requested media data to the receiving
node; establishing additional communication links to the receiving
node by each of the sending nodes such that each sending node is
capable of sending the media data evenly across the additional
links.
10. The method as claimed in claim 3 wherein a streaming
application at the sending node is capable of adaptively
encoding the media data to be streamed from the sending node
according to a bit rate suitable for the identified buffer
conditions of the buffer of the receiving node.
11. A data flow control method for transmission of media data from
a sending node to a receiving node, the receiving node capable of
playing said media data, over a communication network, the method
comprising: responsive to a request for media data requested by the
receiving node, determining if a copy of said media data is stored
locally at the receiving node or stored on a memory device that is
accessible to said receiving node; wherein, if a local copy of the
entire data file or of parts of the requested data is stored
locally, accessing this copy and requesting streaming of only the
missing parts from the sending node.
12. The method as claimed in claim 11 further comprising, if a
local copy of the requested media data is not available at the
receiving node, the method comprises the steps of: identifying the
conditions of the receiving node, including the screen size,
resolution and capability of the display screen connected to said
node; selecting a bitrate for streaming the media data according
to the identified screen size, video resolution and capability
supported by the display screen; streaming the requested media data
from the sending node using the selected bitrate or a higher
bitrate at the outset of said streaming instead of commencing said
streaming at the lowest available bitrate.
13. The data flow control method as claimed in claim 3 wherein, if
the network condition is identified as being congested, then
continuing said streaming at the current streaming rate by only
streaming I-frames of the media data packet and not streaming B-
and P-frames of said media data to the receiving node, until the
network condition changes to normal, to ensure that the media data
is continuously streamed for playback at the receiving node.
14. The data flow control method as claimed in claim 3 wherein,
when the buffer is at a safe level, the method comprises:
identifying previously streamed segments stored in the data buffer
having low video bitrates or partial GOPs in the buffer queue;
identifying the remaining playback time for the data left in the
data buffer at the receiving node; if the remaining playback time
is more than 10 seconds, then identifying the current rate of
streaming of media data from the sending node; if the current rate
of streaming is more than an average rate supported by the
receiving node, then the method further comprises requesting the
sending node to resend the existing frames with low video bit rates
or partial GOPs with higher video bitrates.
15. A data flow control method for transmission of media data from
a sending node to a receiving node, the receiving node capable of
playing said media data, over a communication network, the method
comprising: responsive to a request for media data requested by the
receiving node, identifying a plurality of intermediate data giver
nodes, each storing a local copy of the requested media data; if a
data giver node that is identified as a neighbor of the receiving
node is one of the identified intermediate nodes, then obtaining
the copy of the media data from this neighbor data giver node, said
neighbor node being a peer node of said receiving node; if no data
giver node is identified as being a neighbor of the receiving
node, then streaming the requested media data from the
sending node.
16. A data flow control method for transmission of media data from
a sending node to a receiving node, the receiving node capable of
playing said media data, over a communication network, the method
comprising: responsive to a request for media data requested by the
receiving node, streaming the media data from the sending node at
the currently available bitrate for a defined time period to detect
the network bandwidth; if said bandwidth is capable of supporting a
higher video bitrate when compared to the current rate, the method
comprises switching to said higher video bitrate for continuous
streaming.
17. A data flow control method for transmission of media data from
a sending node to a receiving node, the receiving node capable of
playing said media data, over a communication network, the method
comprising: responsive to a request for media data requested by the
receiving node, streaming the media data from the sending node at
the currently available bitrate for a defined time period to detect
the network bandwidth; identifying a plurality of group of pictures
(GOP) for high motion video data that is to be streamed and
inspecting the size and average bit rate for each GOP and the
network conditions prior to said streaming; if the average bit rate
of a GOP is 30% less than the average bit rate of the currently
streamed media, then the method comprises identifying said GOP as a
low motion picture GOP and switching the current streaming bitrate
to a lower bitrate for streaming the GOP at the lower bit rate;
if the average bitrate of a GOP is 30% more than the average bit
rate of the currently streamed media, then the method comprises
identifying said GOP as a high
motion picture GOP, and switching the current streaming bitrate to
the highest available bitrate for streaming the GOP at the highest
bit rate.
18. The data flow control method according to claim 1 wherein the sending
node is an IPTV streaming server and the receiving node is a client
device including a multimedia player.
19. A system for implementing the method as claimed in claim 1
comprising a sending node and a receiving node capable of
communication via a communication network, the sending node having
a streaming module capable of streaming multimedia data stored in a
memory means of the sending node, and the receiving node capable of
requesting a multimedia data to be streamed from the sending node
for playback on a multimedia player incorporated in the receiving
node.
Description
1. FIELD OF THE INVENTION
[0001] The present application relates to data transmission
protocols for the transfer of media data from a streaming server to
one or more clients. More particularly the present invention
provides an enhanced data flow control method that can be used in
conjunction with an existing protocol such as TCP/IP. The data flow
control method according to the present invention takes into
consideration network conditions as well as receiving node or
client device conditions, such as the state of the data buffer of
the client player, to improve the speed and quality of media data
transmission for Internet protocol television (IPTV) applications.
2. BACKGROUND
[0002] Video traffic currently accounts for over 60% of the
world's bandwidth usage over communication networks such as the
Internet and similar networks such as LANs and WLANs. How such
data is injected into a network has a
strong influence on the overall data flow through the network.
Uncontrolled data injection into a network can lead to congestion
effects such as slow overall traffic flow, packet delay, packet
loss, out-of-order packets, packet re-transmission, flooding or
crashing of network devices (routers, switches etc.), and floods
of uncontrollable traffic. These types of events cause
network traffic to slow down and sometimes to come to a complete
stop if the switching and routing network equipment in use is
unable to cope with the flow demand. Additionally, unmanaged data
injection will have a negative impact for applications that rely on
real-time communication such as VoIP (Voice over IP), live
broadcasts of media events, real-time video conferences and other
time-sensitive applications.
[0003] The Transmission Control Protocol (TCP) is one of the core
protocols of the Internet protocol suite (IP), i.e. the set of
network protocols used for the Internet. TCP provides reliable,
ordered, error-checked delivery of a stream of octets between
programs running on computers connected to a local area network
(LAN), an intranet or the Internet. It resides at the transport layer.
Internet Protocol television (IPTV) is a system through which
television services are delivered using the Internet protocol suite
over a packet-switched network such as a LAN or the Internet,
instead of being delivered through traditional terrestrial,
satellite signal, and cable television formats. TCP is the most
commonly used protocol on the Internet because it offers error
correction. When the TCP protocol is used there is "guaranteed
delivery", due to a mechanism called "flow control". Flow control
determines when data needs to be re-sent and stops the flow of
data until previous packets have been successfully transferred. If
a collision occurs when a packet of data is sent, the receiving
client system or end-point can re-request the packet from the
transmitting server until the received packet is complete and
identical to the original packet that was transmitted. Thus, TCP
is an advanced transport protocol with a 100% success rate on data
delivery and built-in flow control and error correction, which
runs effectively over unmanaged networks.
required for all Open Network IPTV deployments where one or more
network segments are not managed by the IPTV service operator.
[0004] However, adopting TCP in an IPTV Streaming application has
many drawbacks and can cause network traffic issues due to the
structure of this protocol. Standard TCP involves large overheads
in data transmission due to its default data frame structure. The
header refers to the first part of a data cell or packet,
containing information such as source and destination addresses and
instructions on how the telecommunications network is to handle the
data. The header is part of the overhead in a data transmission
protocol. For typical TCP/IP transmissions, i.e. most Internet
traffic, the header is usually 40 bytes of each packet (20-byte TCP
and 20-byte IP headers). TCP and IP headers can be larger than 20
bytes if "options" are enabled in the data transmitted. Internet
Control Message Protocol (ICMP) messages, i.e. messages of the
protocol used for sending test and control messages, have 28-byte
headers. This
overhead due to the headers can impact IPTV user experience,
especially when the network conditions are abnormal, i.e. congested
due to heavy traffic flow. TCP does not offer the ability to cut
off the transmission flow to relieve network congestion. Further,
TCP is incapable of managing bandwidth sending rates to an IPTV
client player without creating unnecessary data waste.
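The header figures in the paragraph above can be put in perspective with a short computation of the overhead fraction per packet. The following Python sketch is illustrative only and not part of the application; the 1460-byte payload figure assumes a typical Ethernet maximum segment size.

```python
# Illustrative check of the TCP/IP header overhead described above.
# A standard TCP/IP header is 40 bytes (20-byte TCP + 20-byte IP).
TCP_HEADER = 20
IP_HEADER = 20
HEADER = TCP_HEADER + IP_HEADER

def overhead_fraction(payload_bytes: int) -> float:
    """Fraction of each packet consumed by headers."""
    return HEADER / (HEADER + payload_bytes)

# A 1-byte payload yields a 41-byte packet, so headers are ~97.6%
# of the traffic; a full 1460-byte payload keeps overhead under 3%.
print(f"{overhead_fraction(1):.3f}")     # ~0.976
print(f"{overhead_fraction(1460):.3f}")  # ~0.027
```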
[0005] The User Datagram Protocol (UDP) is also one of the core
members of the Internet protocol suite. With UDP, computer
applications can send messages, in this case referred to as
datagrams, to other hosts on an Internet Protocol (IP) network
without prior communications to set up special transmission
channels or data paths. UDP uses a simple transmission model with
minimum protocol mechanisms. It has no handshaking dialogues, and
thus exposes any unreliability of the underlying network protocol
to the user's program. UDP provides checksums for data integrity
and port numbers for addressing different functions at the source
and destination of the datagram. However, in UDP there is no
guarantee of delivery, ordering, or duplicate protection.
[0006] UDP is suitable for purposes where error checking and
correction is either not necessary or is performed at the
application prior to transmission, avoiding the overhead of such
processing at the network interface level. Time-sensitive
applications often use UDP because dropping packets is preferable
to waiting for delayed packets, which is not a viable option in a
real-time system. If error correction facilities are needed at the
network interface level, an application residing on a host or a
system for transmitting such data will need to make use of the
Transmission Control Protocol (TCP) or Stream Control Transmission
Protocol (SCTP), which are designed for this purpose.
[0007] UDP has some unique advantages over TCP, but drawbacks as
well. For instance, UDP is required when the transmission
requirements combine unicast and multicast methods. Multicast
allows the available bandwidth to be occupied at fixed data rates
without facing user-growth capacity issues.
However, UDP cannot be used to send important data such as
webpages, database information, etc., and its present use is mostly
limited to streaming audio and video. UDP can offer speed and is
faster for data transmissions when compared to TCP because there is
no form of flow control or error correction in UDP. Therefore the
data sent over the Internet using UDP is affected by collisions,
and errors will be present. Consequently, UDP is only recommended for
streaming media over a managed network, i.e. a network where the
quality of service (QoS) is managed by the service provider, or
when data loss is not a concerning factor for the transmission. Due
to its simplicity and light weight design, UDP is an ideal
transport protocol when transmitting data over QoS managed networks
where packet collision is unlikely to occur. UDP offers fast data
injection, lower packet overhead and faster response times
compared to TCP. For IPTV, the above benefits can improve the
TV-like experience, especially fast channel switching, immediate
movie playback etc., and also reduce stress on the servers and
network devices. However, with UDP, traffic collisions and packet
loss are inevitable, as this protocol does not have any built-in
flow control mechanism.
[0008] The Nagle algorithm, named after John Nagle, proposes
improving the efficiency of TCP/IP networks by reducing the number
of packets that need to be sent over the network. In "Congestion
Control in IP/TCP Internetworks" (RFC 896), a "small packet
problem" is described, where an application repeatedly emits data
in small chunks, frequently only 1 byte in size. Since TCP packets
have a 40-byte header (20 bytes for TCP, 20 bytes for IPv4), this
results in a 41-byte packet for only 1
byte of useful information, which is a huge overhead. This
situation often occurs in Telnet sessions, where most key presses
generate a single byte of data that is transmitted immediately.
Over slow network links, many such packets can be in transit at the
same time, potentially leading to congestion collapse. Nagle's
algorithm works by combining a number of small outgoing messages,
and sending them all at once. Specifically, the sender system or
application should keep buffering its output until it has a full
packet's worth of output, so that output can be sent all at once.
This existing technique, making use of the Nagle algorithm, is explained
below.
[0009] A. For any TCP connection, there is at most one small packet
that is not acknowledged by the receiver application or device.
Unless this is acknowledged, the sender does not transmit any other
small packet (having very few data bytes of useful
information).
[0010] B. TCP collects these small packets and sends them out at
once as one whole packet only after such acknowledgement is
received. Therefore, as more acknowledgements arrive, more data
packets are sent. On WAN, MAN or LAN links, the round-trip time
(RTT) for a TCP connection normally ranges from 100 ms to 300 ms.
This delay allows TCP enough time to collect small packets before
the next acknowledgement arrives.
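Steps A and B above can be sketched in code. The following Python class is an illustrative model of Nagle-style sender buffering only, not the application's own flow-control method; the MSS constant, class name and method names are all assumptions.

```python
# Illustrative sketch of Nagle-style sender buffering: small writes
# are coalesced and held back while a previously sent small packet is
# still unacknowledged (step A); the coalesced data goes out once the
# acknowledgement arrives (step B).
MSS = 1460  # maximum segment size in bytes (typical Ethernet value)

class NagleSender:
    def __init__(self):
        self.buffer = b""
        self.unacked_small_packet = False
        self.sent = []  # packets handed to the network

    def write(self, data: bytes):
        self.buffer += data
        self._try_send()

    def ack_received(self):
        self.unacked_small_packet = False
        self._try_send()

    def _try_send(self):
        # Full segments may always go out immediately.
        while len(self.buffer) >= MSS:
            self.sent.append(self.buffer[:MSS])
            self.buffer = self.buffer[MSS:]
        # A partial segment is sent only if no small packet is in flight.
        if self.buffer and not self.unacked_small_packet:
            self.sent.append(self.buffer)
            self.buffer = b""
            self.unacked_small_packet = True
```

In this model, the first 1-byte write is transmitted at once, while subsequent small writes are coalesced into a single packet that leaves only after the acknowledgement arrives.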
[0011] Though the use of the Nagle algorithm benefits some types
of data communications and transfers using TCP, this benefit does
not extend to IPTV data and services, where multiples of a 300 ms
delay can make the difference between a good and a bad user
experience. Such delays are unacceptable for many applications.
[0012] Therefore, there exists a need for a new method or protocol
for data packet transmission over a communication network that
overcomes the drawbacks of TCP and UDP and provides speed, flow
control and error correction mechanisms, with minimal network
traffic overheads.
3. SUMMARY OF THE INVENTION
[0013] In one aspect, the present invention provides a data flow
control method for transmission of media data from a sending node
to a receiving node, the receiving node capable of playing said
media data, over a communication network, the method comprising
identifying a condition of the communication network between said
sending and receiving nodes, identifying a condition of the
receiving node, and adjusting the media data flow through said
communication network based on the identified condition of the
communication network and the identified condition of the receiving
node.
[0014] In a further aspect the sending node is configured for
encoding and streaming said media data to the receiving node based
on a request for such data from the receiving node, and the
receiving node is capable of decoding and playback of said media
data.
[0015] In a further aspect the step of identifying the condition of
the network comprises detecting the level of network traffic and
determining whether the network between the sending node and the
receiving node is in a normal state or in a congested state, based on
the detected level of network traffic; and [0016] wherein the step
of identifying the condition of the receiving node comprises
determining whether the data buffer at the receiving node is at a
safe level, unsafe level or critical level, the buffer being 80% or
more full in the safe level, 20%-80% full in the unsafe level and
0%-20% full in the critical level; wherein said network and
receiving node conditions are periodically monitored and
communicated between the sending node and the receiving nodes at
defined intervals.
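The three buffer states defined above (safe at 80% or more full, unsafe at 20%-80%, critical at 0%-20%) can be expressed as a simple classifier. This Python sketch is illustrative only; treating the exact 20% boundary as unsafe is an assumption, since the text does not specify it.

```python
# Sketch of the three receiver-buffer states described above.
# Thresholds come from the text; names are illustrative only.
SAFE_THRESHOLD = 0.80
CRITICAL_THRESHOLD = 0.20

def buffer_state(fill_ratio: float) -> str:
    """Classify a receiver buffer by its fill ratio (0.0-1.0)."""
    if fill_ratio >= SAFE_THRESHOLD:
        return "safe"
    if fill_ratio >= CRITICAL_THRESHOLD:
        return "unsafe"
    return "critical"
```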
[0017] In a further aspect, responsive to a request for media data
from the receiving node, the present invention comprises [0018]
streaming the requested media data at an initial data streaming
rate; [0019] identifying a maximum data streaming rate supported by
the receiving node; [0020] identifying the condition of the network;
[0021] identifying the condition of the receiving node; [0022] if
the network condition is identified as being normal and the
condition of the buffer is at critical or unsafe level, then
continuously increasing the rate of data streaming until said
maximum rate is reached, or until the buffer reaches the safe level
or until the network condition becomes congested.
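The ramp-up rule in the aspect above can be sketched as a single rate-update step: keep increasing while the network is normal and the buffer is below the safe level, capped at the receiver's maximum supported rate. This Python fragment is illustrative only; the multiplicative step size and the state labels are assumptions not taken from the application.

```python
# Sketch of the ramp-up rule: increase the streaming rate while the
# network is "normal" and the buffer has not reached the safe level,
# never exceeding the maximum rate supported by the receiving node.
def ramp_up(rate, max_rate, network_state, buffer_state, step=0.1):
    """Return the next streaming rate under the ramp-up rule."""
    if network_state != "normal" or buffer_state == "safe":
        return rate                          # a stop condition is reached
    return min(rate * (1 + step), max_rate)  # otherwise keep increasing
```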
[0023] In a further aspect, if during the step of continuously
increasing the rate of data streaming, the buffer at the receiving
node reaches the safe level, then the method comprises adjusting
the rate of data streaming to a rate that is equal to a draining
rate of the buffer during playback.
[0024] In a further aspect, if during the step of continuously
increasing the rate of data streaming, the network condition
changes to "congested" and remains as congested for a first defined
time period, the method comprises: [0025] identifying the remaining
playback time for the data left in the data buffer at the receiving
node; [0026] reducing the rate of data streaming to near zero or a
calculated low rate of streaming, or completely suspending
streaming of data from the server, until either the network
condition becomes normal or the identified remaining playback
time reduces to 15 seconds or less.
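The sustained-congestion response above reduces to a three-way decision. The sketch below is illustrative only; the parameter names are assumptions, and keeping the current rate at the 15-second floor stands in for the multi-link recovery described in the following aspect.

```python
# Sketch of the congestion response: once the network has stayed
# congested past a defined period, cut streaming to near zero (here,
# fully suspend) until the network recovers or the buffered playback
# time drops to the 15-second floor named in the text.
PLAYBACK_FLOOR_SECONDS = 15

def backoff_rate(congested_for, threshold, remaining_playback, current_rate):
    """Return the streaming rate to use under sustained congestion."""
    if congested_for < threshold:
        return current_rate   # congestion not yet sustained
    if remaining_playback <= PLAYBACK_FLOOR_SECONDS:
        return current_rate   # too close to underrun: keep streaming
    return 0.0                # suspend until recovery or the floor
```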
[0027] In a further aspect, if the remaining playback time reduces
to 15 seconds or less, the method comprises: [0028] requesting the
sending node to accept additional network communication links
between the sending node and the receiving node; [0029] determining
the total number of additional links required to sustain real-time
playback at the receiving node; [0030] establishing the additional
links by the sending node; [0031] streaming the media data from the
sending node across all established links evenly, such that if the
condition of the network is identified as congested on a first link
of the plurality of links, the media data is sent via the next
available communication link.
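The even multi-link distribution above can be sketched as round-robin assignment that skips any link reported as congested. Illustrative Python only; the packet and link representations are assumptions.

```python
# Sketch of even distribution of media data across established links,
# skipping a link on which the network condition is congested.
def distribute(packets, links, congested):
    """Assign each packet to a link round-robin, skipping congested links."""
    usable = [link for link in links if link not in congested]
    if not usable:
        raise RuntimeError("no usable links")
    return {p: usable[i % len(usable)] for i, p in enumerate(packets)}
```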
[0032] In a further aspect the method comprises reordering of media
data packets arriving at the receiving node out of sequence by
making use of the identifier of the sequence in the header part of
each media data frame.
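The reordering step above amounts to sorting received frames by the sequence identifier carried in each frame header. A minimal Python sketch, assuming frames arrive as (sequence id, payload) pairs:

```python
# Sketch of reordering out-of-sequence media frames using the sequence
# identifier from each frame header (here modelled as a tuple field).
def reorder(frames):
    """Return the payloads of received frames in sequence order."""
    return [payload for seq, payload in sorted(frames)]
```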
[0033] In a further aspect the method further comprises: [0034]
identifying one or more additional sending nodes that are capable
of streaming the requested media data to the receiving node; [0035]
establishing additional communication links to the receiving node
by each of the sending nodes such that each sending node is capable
of sending the media data evenly across the additional links.
[0036] In a further aspect a streaming application at the sending
node is capable of adaptively encoding the media data to be
streamed from the sending node according to a bit rate suitable for
the identified buffer conditions of the buffer of the receiving
node.
[0037] In another aspect, the present invention provides a data
flow control method for transmission of media data from a sending
node to a receiving node, the receiving node capable of playing
said media data, over a communication network, the method
comprising: [0038] responsive to a request for media data requested
by the receiving node, determining if a copy of said media data is
stored locally at the receiving node or stored on a memory device
that is accessible to said receiving node; wherein, if a local copy
of the entire data file or of parts of the requested data is stored
locally, accessing this copy and requesting streaming of only the
missing parts from the sending node.
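The partial-copy logic above requires working out which parts of the file are not held locally. The Python sketch below assumes byte-range bookkeeping, which the application itself does not specify.

```python
# Sketch of computing the byte ranges that must still be requested from
# the sending node, given which ranges are already stored locally.
def missing_ranges(file_size, local_ranges):
    """Return the (start, end) ranges not covered by local copies."""
    missing, cursor = [], 0
    for start, end in sorted(local_ranges):
        if start > cursor:
            missing.append((cursor, start))  # gap before this local range
        cursor = max(cursor, end)
    if cursor < file_size:
        missing.append((cursor, file_size))  # tail not held locally
    return missing
```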
[0039] In a further aspect, if a local copy of the requested media
data is not available at the receiving node, the method comprises
the steps of:
[0040] identifying the conditions of the receiving node, including
the screen size, resolution and capability of the display screen
connected to said node;
[0041] selecting a bitrate for streaming the media data according
to the identified screen size, video resolution and capability
supported by the display screen;
[0042] streaming the requested media data from the sending node
using the selected bitrate or a higher bitrate at the outset of
said streaming instead of commencing said streaming at the lowest
available bitrate.
[0043] In a further aspect if the network condition is identified
as being congested, then continuing said streaming at the current
streaming rate by only streaming I-frames of the media data packet
and not streaming B and P frames of said media data to the
receiving node, until the network condition changes to normal, to
ensure that the media data is continuously streamed for playback at
the receiving node.
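The I-frame-only fallback above can be sketched as a frame filter. Illustrative Python only; the frame-type tagging is an assumption.

```python
# Sketch of the congestion fallback: while the network is congested,
# forward only I-frames and drop B- and P-frames, so playback can
# continue at the current streaming rate.
def frames_to_stream(frames, network_state):
    """Drop B- and P-frames while the network is congested."""
    if network_state == "congested":
        return [f for f in frames if f["type"] == "I"]
    return list(frames)
```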
[0044] In a further aspect when the buffer is at a safe level, the
method comprises: [0045] identifying previously streamed segments
stored in the data buffer having low video bitrates or partial GOPs
in the buffer queue; [0046] identifying the remaining playback time
for the data left in the data buffer at the receiving node; [0047]
if the remaining playback time is more than 10 seconds, then
identifying the current rate of streaming of media data from the
sending node; [0048] if the current rate of streaming is more than
an average rate supported by the receiving node, then the method
further comprises requesting the sending node to resend the
existing frames with low video bit rates or partial GOPs with
higher video bitrates.
[0049] In another aspect, the present invention provides a data
flow control method for transmission of media data from a sending
node to a receiving node, the receiving node capable of playing
said media data, over a communication network, the method
comprising: [0050] responsive to a request for media data requested
by the receiving node, identifying a plurality of intermediate data
giver nodes, each storing a local copy of the requested media data;
[0051] if a data giver node that is identified as a neighbour of
the receiving node is one of the identified intermediate nodes,
then obtaining the copy of the media data from this neighbour data
giver node, said neighbour node being a peer node of said receiving
node; [0052] if no data giver node is identified as being a
neighbour of the receiving node, then streaming the requested
media data from the sending node.
[0053] In another aspect, the present invention provides a data
flow control method for transmission of media data from a sending
node to a receiving node, the receiving node capable of playing
said media data, over a communication network, the method
comprising: [0054] responsive to a request for media data requested
by the receiving node, streaming the media data from the sending
node at the currently available bitrate for a defined time period
to detect the network bandwidth; [0055] if said bandwidth is
capable of supporting a higher video bitrate when compared to the
current rate, the method comprises switching to said higher
video bitrate for continuous streaming.
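The probe-then-switch behaviour of paragraphs [0054]-[0055] could be sketched as follows (Python; the bitrate ladder and the selection of the highest supportable rung are illustrative assumptions):

```python
def probe_and_select(current_bitrate, available_bitrates, measured_bandwidth):
    """After streaming at the currently available bitrate for a probe
    period, switch up to the highest available bitrate the measured
    network bandwidth can support; otherwise keep the current bitrate."""
    candidates = [b for b in available_bitrates
                  if current_bitrate < b <= measured_bandwidth]
    return max(candidates) if candidates else current_bitrate
```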
[0056] In another aspect, the present invention provides a data
flow control method for transmission of media data from a sending
node to a receiving node, the receiving node capable of playing
said media data, over a communication network, the method
comprising: [0057] responsive to a request for media data requested
by the receiving node, streaming the media data from the sending
node at the currently available bitrate for a defined time period
to detect the network bandwidth; [0058] identifying a plurality of
group of pictures (GOP) for high motion video data that is to be
streamed and inspecting the size and average bit rate for each GOP
and the network conditions prior to said streaming; [0059] if the
average bit rate of a GOP is 30% less than the average bit rate of
the currently streamed media, then the method comprises identifying
said GOP as a low motion picture GOP and switching the current
streaming bitrate to a lower bitrate for streaming the GOP at the
lower bit rate; [0060] if the average bitrate of a GOP is 30%
more than the average bit rate of the currently streamed media,
then the method comprises identifying said GOP as a high motion
picture GOP, and switching the current streaming bitrate to the
highest available bitrate for streaming the GOP at the highest bit
rate.
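The 30% GOP classification rule of paragraphs [0059]-[0060] can be expressed as follows (Python; the function name and labels are illustrative):

```python
def classify_gop(gop_avg_bitrate, stream_avg_bitrate):
    """Classify a GOP against the average bitrate of the currently
    streamed media: 30% below -> low motion (switch down for this GOP);
    30% above -> high motion (switch to the highest available bitrate)."""
    if gop_avg_bitrate < 0.7 * stream_avg_bitrate:
        return "low_motion"
    if gop_avg_bitrate > 1.3 * stream_avg_bitrate:
        return "high_motion"
    return "normal"
```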
[0061] In a further aspect the sending node is an IPTV streaming
server and the receiving node is a client device including a
multimedia player.
[0062] In another aspect, the present invention provides a system
for implementing the method as claimed in any one of the preceding
claims comprising a sending node and a receiving node capable of
communication via a communication network, the sending node having
a streaming module capable of streaming multimedia data stored in a
memory means of the sending node, and the receiving node capable of
requesting multimedia data to be streamed from the sending node
for playback on a multimedia player incorporated in the receiving
node.
4. BRIEF DESCRIPTION OF THE DRAWINGS
[0063] FIG. 1 and FIG. 2 show the frame structures for TCP and UDP,
respectively.
[0064] FIG. 3 shows a flow chart depicting an exponential speed up
mode for the data flow control method according to a first
embodiment.
[0065] FIG. 4 shows a flow chart depicting an exponential back off
mode for data flow control method according to the first
embodiment.
[0066] FIG. 5 shows a flow chart depicting a linear trickle off
mode for the data flow control method according to the first
embodiment.
[0067] FIG. 6 shows a method of bitrate selection for a data
sharing mode of the data flow control method according to a second
embodiment.
[0068] FIG. 7 shows a method of adaptive bitrate selection for high
quality video data playback for the data flow control method
according to a third embodiment.
[0069] FIG. 8 shows a method of bitrate selection based on
resolution for the data flow control method according to the third
embodiment.
[0070] FIG. 9 shows a flow chart depicting a method for a selective
frame drop for the data flow control method according to the third
embodiment.
[0071] FIGS. 10a and 10b show charts depicting viewing experience
with and without the selective frame drop of FIG. 9,
respectively.
[0072] FIG. 11 shows a flow chart depicting a method for allocation
of bandwidth for high motion video frames for the data flow control
method according to the third embodiment.
[0073] FIG. 12 shows a flow chart depicting a buffer repair mode
for the data flow control method according to the third
embodiment.
[0074] FIG. 13 shows a flow chart depicting the interaction between
modes of the first, second and third embodiments.
[0075] FIG. 14 shows a flow chart depicting a method for adaptively
enabling or disabling the Nagle algorithm according to the present
invention.
[0076] FIGS. 15a and 15b show a table and graph depicting the
performance test results with and without the use of the method of
FIG. 14, respectively.
5. DETAILED DESCRIPTION OF THE EMBODIMENTS
[0077] As data moves along a network, various attributes are added
to the data file to create a frame. This process is called
encapsulation. There are different methods of encapsulation
depending on which protocol and topology is being used. As a
result, the frame structures of data frames differ. FIG. 1
illustrates a TCP frame structure and FIG. 2 illustrates a UDP
frame structure. The payload field in the shown frames contains the
actual data. TCP has a more complex frame structure than UDP. This
is largely due to TCP being a reliable connection-oriented
protocol, as explained in the background section. The additional
fields shown in FIG. 1 (when compared to the UDP frame shown in FIG.
2) are those needed to ensure the "guaranteed delivery" offered by
TCP. Therefore TCP is a much slower data transmission protocol when
compared to UDP, and with much larger overheads. This is especially
so if TCP is combined with the use of the Nagle algorithm described
in the background section.
[0078] The present invention provides a new data transmission
protocol or data flow control method for use in the Internet
protocol suite. Particularly, the present invention provides a
plurality of flow mechanisms or modes for media data packet
transmission, preferably video data transmission over a
communication network, which overcome the drawbacks of TCP and UDP
and provide speed, flow control and error correction mechanisms,
with minimal network traffic overheads.
[0079] In one aspect, the present invention provides a data flow
control method that handles data flow management on the application
layer of the OSI model. Though the present invention is concerned
with media data and specifically video data for IPTV services, a
skilled person would easily understand that the present invention
can be used for managing the flow of any type of data and
information that can be transported over a communication network such
as the Internet.
[0080] The data flow control method according to a first embodiment
of the present invention is based on monitoring one or more sending
node or server side conditions (for instance, an IPTV provider's
server for sending the data) as well as one or more receiving node
or client side conditions (client device such as a player or a
set-top box for receiving the data). The present invention
facilitates information and data exchange between the sending
server and the receiving client for communicating local network
conditions at each end. Based on the conditions detected
from both the client device and the server device, the method of
data flow control according to the first embodiment is able to
calculate and predict the network environment.
[0081] Upon a network condition being detected or notified to
either the server or the client, the flow control method according
to the present invention is capable of applying one or more data
transmission modes or techniques (these modes are explained in
detail below) to ensure that high quality video data can be
streamed over unmanaged and/or fluctuating networks.
[0082] In another aspect, the data flow control method of the
present invention is capable of consuming unused bandwidth (left
over or wasted bandwidth) in the network for more efficient data
transmissions by data sharing, local caching and data
recycling.
[0083] In a further aspect, the data flow control of the present
invention provides high video quality delivery and maintains
smoothness of video playback on any network.
[0084] The data flow control method or protocol incorporates a
combination of RTSP (Real-time Streaming Protocol) encapsulated
over HTTP (Hypertext Transfer Protocol). The data flow control
method according to the present invention is handled in the
application layer. The method is capable of implementing one or
more modules which reside on either the server side or the client
side terminals, or both. The client and server nodes, equipped with
the modules for implementing such flow controls constantly work
together in collaboration to predict the network flow, adjust data
flow, enhance video quality and navigate through various network
routes to maintain a good IPTV user experience that conventional
data transmission protocols such as TCP and UDP cannot offer.
[0085] Balancing between fast responses, smooth data flow and
quality of data are some of the objectives of the present
application. A summary of some of the advantages of the data flow
control method according to the present invention is provided
below. It is not essential that all of these advantages are
achieved for a single transmission, as for a particular
transmission one effect may be more important than other
advantageous effects.
[0086] A. Send media data to the end users as fast as possible to
ensure the buffer at the user device, i.e. the client system stays
full and maintains smooth playback.
[0087] B. Detect congestion ahead of TCP and back-off (reduce or
stop sending packets) immediately (not gradually like TCP), which
will help to ease off congestion rapidly.
[0088] C. Efficient use of bandwidth by using a dynamic multiple
link strategy (multiple network paths & routes) prior to
establishing the session.
[0089] D. Detect and differentiate between physical network
congestion vs. normal network congestion and apply a suitable
control mode to avoid self-competing.
[0090] E. Support adaptive bitrate streaming based on the network
conditions.
[0091] F. Provide a high video experience by utilizing the network
channel fully and giving priority to high bit rate video GOP (Group
of pictures).
[0092] G. Provide buffer repair, such that when the client buffer
is at a healthy stage, the data flow control method is configured
to re-evaluate the video bitrates on the buffer and replace
lower/poor quality segments with higher video quality. Such repair
takes place safely and effectively only when network condition
permits.
[0093] H. The flow control method is configured for recycling data
by caching popular data on local storage devices to prevent
repeated streaming from the server, and is also configured to
share locally cached data with peers.
[0094] I. Switch to P2P style of communication when conditions
permit. This is mainly used in VOD and Replay-TV scenarios.
[0095] J. Co-exist peacefully with other kinds of service data
flow, such as VoIP, i.e. with no packet collisions.
[0096] The application layer data flow control method according to
the present invention comprises data flow control methods and video
quality control methods.
[0097] According to a first embodiment of the present invention,
data flow control methods or modes that are applied based on
network conditions and buffer conditions are:
[0098] A. Exponential Speedup (sending data at an increasing
rate)
[0099] B. Exponential Back Off (reduce data sending rate to near
zero)
[0100] C. Linear or smooth Trickle Mode (sending data at a rate
equal to the video playing rate)
[0101] D. Dynamic Multiple links (sending data in multiple TCP
connections)
[0102] E. Adaptive Streaming considering network and player buffer
conditions.
[0103] According to a second embodiment of the present invention,
data flow control methods to achieve data sharing, improve
overall network and streaming efficiency and reduce network
resource usage are:
[0104] A. Data Recycling (Preserve and reuse cached data whenever
possible)
[0105] B. Hybrid peer-to-peer (P2P) Streaming (Receive and share
data with other clients in a controlled manner when conditions
allow)
[0106] According to a third embodiment of the present invention,
video quality flow control methods are:
[0107] A. Adaptive Bitrate Streaming based on a quality greedy
proviso (ensure high quality video data are sent with highest
priority).
[0108] B. Smart start video bit rate selection (dynamically select
the best video bitrates based on device resolution to improve user
experience)
[0109] C. Video frame selective drop or frame bypass (maintaining
video continuity by ignoring non-I frames until the network
condition improves)
[0110] D. Motion Picture First (Allocation of bandwidth for High
Motion video frames)
[0111] E. Low video quality buffer repair/replacement of poor video
with higher bitrates (If the right condition occurs, go back to the
buffer and replace previously un-played low quality video (GOPs)
with higher quality video)
5.1 First Embodiment
[0112] Data flow Control mechanisms based on network conditions as
well as client or receiving node's buffer conditions.
[0113] Though TCP is adequate and reliable for video streaming, a
good IPTV experience is one that is comparable with traditional
Digital Cable TV, Satellite TV and Terrestrial TV. The expectations
are good video quality, fast channel changing, immediate video
acquisition and continuous streaming. In order to achieve this
level, the present invention proposes a plurality of data flow
control mechanisms that can work in conjunction with TCP and the
public network and navigate around congested network segments. The
following mechanisms or data flow control modes are different from
the techniques applied by traditional TCP or UDP because they are
based on a collaboration of network conditions when the data is
streamed from a sending node as well as the conditions of the
player or client buffer. Previous and existing systems do not have
this collaboration and are reliant on reporting of anomalies in the
network. In the present invention, network conditions and buffer
conditions can be obtained from the server (the sending node--this
need not be the only or original source of the data and may also be
an intermediate node storing the data file) or the client or end
user receiving node/player, or by both nodes making use of
information exchanges between them.
[0114] 5.1.1 Exponential Speed Up (Push Forward as Fast as
Possible):
[0115] The exponential speed up data flow control mechanism of the
first embodiment is shown in FIG. 3. The preferred steps of this
mechanism are explained below:
[0116] 3a. Set an initial sending rate, init_rate, to 1.6× the
`real-time playback` bit rate of the VOD (video on demand) file,
denoted rt_bitrate.
[0117] 3b. Set a maximum push rate, max_rate, to 0.625× (i.e.
1/1.6) of the downlink bandwidth reported from the player.
[0118] 3c. If max_rate is less than init_rate, then the flow
control method sets the max_rate to init_rate.
[0119] 3d. Try to push forward media data at init_rate. If the
network is normal (no congestion), then try to push at 1.6× the
current speed, i.e. 1.6*init_rate, which equals
1.6*1.6*rt_bitrate.
[0120] 3e. Continue to increase the push speed in an exponential
manner until the cache buffer on the player side is 80% full, the
max_rate is reached or the network becomes congested.
[0121] The above steps 3a-3e set out the main features of the
exponential speed-up data flow control mode. The following steps
explain mechanisms employed based on additional abnormal buffer
conditions and network conditions, and set out the procedure for
achieving efficient data flow following the exponential speed up
mode by interacting with the other data flow control mechanisms of
the first embodiment.
[0122] 3f. If the cache level is more than 95%, then switch to the
exponential back off process immediately (this is explained in
5.1.2 below).
[0123] 3g. Else if the cache buffer is more than 80% full, switch
to Linear Trickle Mode (see 5.1.3 below) to push forward media data
at 1× rt_bitrate.
[0124] 3h. If max_rate is reached, keep this rate until the cache
buffer is 80% full or the network becomes congested.
[0125] 3i. When the network becomes congested, if the cache buffer
reaches a critical level, the data flow control method then
proceeds to the Dynamic Multilink Process (see 5.1.4). Otherwise,
the push (streaming) speed can be decreased to 0.625× (1/1.6) of
the current speed. If the network congestion prevails, the push
speed may be decreased again to 0.625× (1/1.6) of the current
speed, but no less than 1× rt_bitrate. When there is buffering on
the player side, i.e. the client device, the multi-path/concurrent
multiple routes process (see item 5.1.5) can be initiated.
[0126] 3j. When the network recovers from congestion, the data flow
control mechanism attempts to send at the last push speed and then
proceeds to repeat steps 3e-3j above.
[0127] 3k. After the flow control method returns from exponential
back off (see 5.1.2), steps 3d-3j above can be repeated.
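Steps 3a-3e above can be sketched as follows (Python; the 1.6× factor, the 0.625× reduction and the max_rate cap follow steps 3a-3c and 3i, while the function structure is an illustrative assumption):

```python
def initial_rates(rt_bitrate, downlink_bw, factor=1.6):
    """Steps 3a-3c: start at 1.6x the real-time playback bitrate; cap the
    push rate at 1/1.6 (0.625x) of the player-reported downlink bandwidth,
    but never below the initial rate."""
    init_rate = factor * rt_bitrate
    max_rate = downlink_bw / factor
    if max_rate < init_rate:
        max_rate = init_rate
    return init_rate, max_rate

def next_push_rate(current_rate, max_rate, congested, factor=1.6):
    """Steps 3d-3i: multiply the push rate by 1.6x while the network is
    clear (capped at max_rate); drop to 0.625x of the current speed on
    congestion."""
    if congested:
        return current_rate / factor
    return min(current_rate * factor, max_rate)
```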
[0128] 5.1.2 Exponential Back Off
[0129] This back off mechanism of the flow control method of the
first embodiment can be triggered upon detection of
congestion/conditions of the network or player buffer that match
pre-set back-off criteria. The best solution to ease off congestion
for IPTV packet data transmissions is to back-off or navigate using
one or more different paths to avoid contributing to the existing
network congestion and traffic. The exponential back off mode or
mechanism of the data flow control method according to the present
invention (also referred to as a friendly back-off mode) is
triggered when congestion is detected. This mode will suspend all
other flow control modes and reduce the data sending rate to near
zero, i.e. 0.05× rt_bitrate, to yield bandwidth to other
applications.
[0130] The exponential back off data flow control mechanism of the
first embodiment is shown in FIG. 4. The preferred steps of this
mechanism are explained below:
[0131] 4a. During the playback session, the server or a system
having a streaming application or module will try to push forward
media data according to the `Exponential Speed Up` mechanism set
out in item 5.1.1.
[0132] 4b. During the play back session, the client device or
player will periodically report to the streaming application its
cached/buffered media data size and `real-time play back` duration,
i.e. the amount of playing time left in the buffer. The update is
sent every 2 seconds to 5 seconds, depending on the RTP/RTCP (Real
time control protocol) calculation used. The server or streaming
application can record this information for later use.
[0133] 4c. If continuous network congestion is detected in the
network path by the exponential speed up mechanism, the streaming
server will stop sending data to the TCP layer for a time span,
which for instance equals 1/3 of the `Real-Time Play Back` time
reported from the player in step 4b.
[0134] The following steps are provided to show the working of this
mechanism in combination with the other mechanisms and data flow
control modes of the present application.
[0135] 4d. After the delay time in step 4c expires, the streaming
application or module will try to push data at the last push speed
recorded by the Exponential Speed Up mechanism (5.1.1 above) to
compensate for the earlier delay loss in step 4c.
[0136] 4e. If step 4d fails due to network throughput or congestion,
or if the player is not receiving all packets within a normal
timeline, the streaming application will stop sending and recompile
a new calculation based on the new `Real-Time Play Back` reported
from the player. The process will continue to yield or free up
network bandwidth until the `Real-Time Play Back` is less than 15
seconds (or a defined critical level) or the network condition
becomes normal, i.e. there is no congestion and the transmission
occurs within a predicted or expected time and at an expected QoS
level.
[0137] 4f. If the cached data left in the player is less than 15
seconds (critical level) because of the back off procedure, the
`Dynamic Multiple Link` mechanism set out in 5.1.4 can be applied
to compensate for the earlier delay/loss in a quick and efficient
manner.
[0138] 4g. If the network resumes normal conditions, the
exponential back off mode can be exited and the exponential speed
up mode of 5.1.1 can be resumed.
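The back-off timing of steps 4c-4g can be sketched as follows (Python; the 1/3 time span and the 15-second critical level follow the text, while the function structure is an illustrative assumption):

```python
def backoff_delay(remaining_playback_s):
    """Step 4c: pause sending to the TCP layer for 1/3 of the remaining
    `Real-Time Play Back` duration reported by the player."""
    return remaining_playback_s / 3.0

def should_exit_backoff(remaining_playback_s, network_normal, critical_s=15):
    """Steps 4e-4g: leave back-off when the network recovers, or when the
    player's remaining playback falls below the critical level (in which
    case the Dynamic Multiple Link mechanism takes over)."""
    return network_normal or remaining_playback_s < critical_s
```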
[0139] 5.1.3 Linear Trickle Mode
[0140] The linear or smooth trickle mode of the data flow control
method according to the first embodiment is triggered or applied
when the cache buffer on the player side or client terminal is at a
safe level (80% or more of the buffer). The IPTV streaming module
or application at the server node will enter the linear trickle
mode at the safe level. In this mechanism, the streaming
application will send media data at 1× rt_bitrate speed, which
is equal to the draining speed of the cache buffer on player side
when the data from the buffer is being used. This ensures that the
buffer may be maintained at a safe level, i.e. 80%, to ensure
smooth video data playback.
[0141] A flowchart depicting the linear trickle mode is seen in
FIG. 5. In this figure, the data flow is initially shown to be in
the exponential speed up mode in 5a. At step 5b, it is determined
if the data cache level is more than 95% and if so, the exponential
back-off mode is initiated in step 5c (see 5.1.2). If the cache
level is determined to be at least 80% at step 5d, then at 5f the
linear trickle mode is initiated. This determination at step 5d can
also
be made after checking the network conditions in step 5e, as shown
in FIG. 5.
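The cache-level thresholds of FIG. 5 (steps 5b-5f) can be summarised as follows (Python; the mode names are illustrative):

```python
def select_flow_mode(cache_fill):
    """Mode selection by player cache level: back off above 95%, trickle
    at 1x rt_bitrate between 80% and 95%, otherwise keep speeding up."""
    if cache_fill > 0.95:
        return "exponential_back_off"   # step 5b/5c
    if cache_fill >= 0.80:
        return "linear_trickle"         # step 5d/5f
    return "exponential_speed_up"       # step 5a
```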
[0142] 5.1.4 Dynamic Multiple Link Mechanism
[0143] Traditional and existing data transmissions are established
as a unicast session between one client and one server over one TCP
link. When this path between the client and server is blocked or
congested, conventional technology will start to buffer data or to
give up completely. Furthermore, even if the data is acquired from
multiple sources via multiple TCP links, the following issues are
encountered:
[0144] A. Packet re-ordering, where one piece of data arrives out of
sequence and must be discarded. This effect multiplies with the
total number of TCP links in use, and the problem gets worse.
[0145] B. Packets are delayed and are ordered in the wrong sequence
between multiple sources.
[0146] C. Data may not be continuous after reassembly at the player
side.
[0147] D. Preventing duplication of data transmission and
reassembling packets acquired via multiple sources.
[0148] The use of dynamic multiple links (DML) as a connection
management mechanism within the data flow control method of the
first embodiment is for establishing and maintaining a plurality of
connections between the server side system and a client player or
system based on network conditions as well as conditions in the
client environment. The DML mechanism dynamically establishes
multiple TCP connections between the server and the client to
achieve higher network throughput. At the same time, the mechanism
also utilises these multiple network paths to help ease off
congestion and allow the player to continuously maintain the video
session without interruptions. This is useful when data is urgently
needed to prevent video buffering effect for IPTV. This DML
mechanism is based on information exchange and cooperation between
the client side and the server, such that when data is required
urgently, a module (this may be a dedicated DML module or
integrated with other devices) is capable of computing the total
connections needed and to request the server side to accept new
connections. The server side determines how many connections it
will use depending on other factors and network conditions.
[0149] During a streaming session, the streaming application in the
server will try to send data across all the available TCP links in
an average manner, i.e. evenly. When one data segment cannot be
sent out on one link because of network congestion, the server will
try the next available link. This unselective sending policy will
increase the whole throughput between server and client. It can
also be used as an emergency buffer rescue measure when there is a
need to compete for bandwidth resources with other cross traffic to
maintain smooth playback.
[0150] The media packets arriving at
the client side via multiple links are usually shuffled and arrive
out of order. Therefore, re-ordering is required at the client,
which can preferably be based on the sequence number located in the
RTP header. The client is preferably equipped with a module or
application to deal with reordering the out of sequence RTP
packets arriving from different links and to give priority to the
out of order packets, to ensure the buffer is cleanly arranged for
continuous playback of the received data.
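Client-side reordering on the RTP sequence number, as described above, could be sketched as follows (Python; a minimal illustration that ignores 16-bit RTP sequence-number wrap-around):

```python
import heapq

class RtpReorderBuffer:
    """Reorder RTP packets arriving out of order over multiple links,
    keyed on the RTP sequence number, releasing only contiguous runs."""
    def __init__(self, first_seq):
        self.next_seq = first_seq
        self.heap = []                       # min-heap of (seq, payload)

    def push(self, seq, payload):
        heapq.heappush(self.heap, (seq, payload))

    def pop_ready(self):
        """Return payloads that are now in contiguous sequence order."""
        out = []
        while self.heap and self.heap[0][0] == self.next_seq:
            _, payload = heapq.heappop(self.heap)
            out.append(payload)
            self.next_seq += 1
        return out
```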
[0151] The use of multiple paths or links must be applied with
management and governance by the IPTV service provider and
regulatory services so that this is a fair and friendly strategy
for today's networks, especially with many media service providers
and other types of data transmissions competing for bandwidth on
the same channel. The fair use of the DML mechanism of the first
embodiment can yield many benefits, for instance:
[0152] 1. Establish multiple links with the client/receiver device
to gauge possible alternate network paths to a single server
streaming video data. One main TCP link, i.e. a master link, is
used under normal conditions i.e. with normal traffic conditions.
If the network throughput reduces during a streaming session, then
the DML mechanism is configured to send traffic over other TCP
links (slave links) that were established at the beginning of the
session. Therefore, the plurality of links is established before
the data transmission takes place, based on network and client
conditions, and these links are used in a dynamic manner as traffic
along a network channel changes.
[0153] 2. If the data flow conditions continue to deteriorate using
multiple data links, this is indicative of either A--network
congestion is caused by other traffic or B--network congestion is
caused by the physical network path. The player/client side will be
unable to determine if the reason is A or B, and even if
determined, will be unable to react to such condition. Therefore,
by making use of DML mechanism, the client can collaborate with the
server in an attempt to use DML to achieve better throughput. If
the data flow condition does not show improvement, then it may be
assumed that condition B has occurred and the data flow method may
choose to switch to a different mechanism or mode for dealing with
the abnormal condition. For instance, an adaptive streaming
strategy as set out in item 5.1.5 may be applied by the flow
control. The DML mechanism of the first embodiment allows full
utilisation of bandwidth resources, and also treats other network
traffic fairly.
[0154] 3. The use or non-use of DML mechanism is dynamically
adjusted under predetermined conditions. For instance, in case of
condition "A" above, a possible reaction of the data flow control
method of the present invention is to use more links i.e. using the
DML mechanism, to achieve extra TCP resources to rapidly fill the
IPTV player buffer and exit the crowd in the network, as
congestion can be eased by not joining existing traffic.
[0155] This mechanism is based on cooperation between both client
and server. In a preferred model, based on network and client
conditions, the client is responsible for establishing new links,
following which the server can send media data across some or all
of the available TCP links.
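The server's "try the next available link" sending policy of paragraph [0149] can be sketched as follows (Python; send_fn is a hypothetical callable standing in for a per-link TCP send that returns True on success):

```python
def send_segment(segment, links, send_fn):
    """Try each available TCP link in turn and send the segment on the
    first link that accepts it; return the link used, or None if every
    link is currently congested."""
    for link in links:
        if send_fn(link, segment):
            return link
    return None
```

In a fuller implementation the starting link would rotate so that traffic is spread evenly across links, per the "average manner" sending described above.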
[0156] The above discussion relates to links between one server and
one client. The following describes a further aspect of the dynamic
multiple link mechanism for use with more than one server capable
of streaming the required video data file.
[0157] When one or more users request (from client players or
devices such as a set top box connected to a display device, i.e. a
television screen) a video data stream, these requests are
routed to the healthiest streaming servers, i.e. the plurality of
servers that are best suited for delivering the requested file.
Server health is assessed based on conditions such as server loads
and ease of access to such data etc. Once the video starts
streaming from the streaming application of the servers, the DML
mechanism is then used to establish all the possible route paths
between the clients and a specific number of streaming servers. The
data flow control method according to the present invention applies
multiple concurrent routing mechanisms based on the DML mechanism
described above, when certain conditions are satisfied. Examples of
these conditions are given below:
[0158] 1. When a player suffers buffering even after having applied
the DML strategy and is still not able to gain more data, this problem
is predicted as a route congestion issue and the data flow control
method of the present invention will check for other sources of the
data that could more efficiently transmit to the player, before
changing a flow control mode.
[0159] a. The player (client) will set up another connection to an
alternative suggested streaming server based on information on the
plurality of servers available for use. Such information may be
available in an index file or data structure and comprises
information on the geo-location, availability and available capacity
of each server.
[0160] b. If the player can get smooth playback from this
alternative server, then no further servers will need to be
identified.
[0161] c. Or else, the player will try to identify additional
streaming servers and will proceed to request concurrent data
transfer from all the identified streaming servers holding the same
data.
[0162] d. The player will continue to monitor the data
effectiveness from all active sources and if one specific source is
not performing as required, then it will stop the connection from
this server and request the other concurrent streaming servers to
alter the data flow pattern.
[0163] e. When the network paths to the player are all not
effective, then the dynamic link mechanism is used across all
routes to ensure that the buffer reaches a level that is suitable
for the smooth trickle mode explained in 5.1.3 above.
[0164] 2. When the data reaches 40% buffer level and is of a high
video quality, then a hybrid Server to Client and Client to Client
data transmission method may be applied by the flow control method
of the present invention. This is explained in more detail in the
second embodiment relating to data sharing techniques. Data givers
(Server or other client devices) that have faster response times
and a better network path will be chosen as the route for data
distribution.
[0165] The multiple concurrent routing mechanisms making use of the
DML mechanism provide the flow control method of the present
invention with the capability to navigate via multiple traffic
routes and avoid congested segments dynamically, based on the
network and client buffer conditions detected.
[0166] 5.1.5 Adaptive Streaming
[0167] Adaptive bitrate streaming is a technique used in streaming
multimedia over computer networks. It works by detecting a user's
bandwidth and CPU capacity in real time and adjusting the quality
of a video stream accordingly. It requires the use of an encoder
which can encode a single source video at multiple bit rates. The
player client switches between streaming the different encodings
depending on available resources. As a result, very little
buffering, a fast start time and a good experience for both high-end
and low-end connections can be obtained for IPTV applications.
Adaptive streaming is used nowadays in HLS or DASH video streaming
services. These standards progressively download and switch streams
dynamically in real time based on the network flow. However,
existing adaptive streaming techniques do not consider client
player capability, buffer conditions or the video quality for
playback at the client.
[0168] The data flow control method of the present invention
proposes an adaptive streaming mechanism for switching the video
stream based on the network conditions while at the same time also
considering the highest quality video delivery. This is achieved by
collaboration between the server side and the client side to
receive information relating to client side (player) conditions
such as the buffer level, playback remaining time and the current
sending speed. This enables a better "stream switch"
decision. Therefore, by taking into consideration network
conditions as well as buffer conditions, adaptive streaming
according to the first embodiment of the present invention can
deliver a high video quality output.
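As an illustration only, the collaborative switch decision described above might be sketched as follows in Python; the function name, parameters and the 10-second safety threshold are assumptions for this sketch, not values taken from the specification:

```python
def choose_stream(bitrates, send_speed_bps, buffer_sec, remaining_sec,
                  safe_buffer_sec=10.0):
    """Pick the highest bitrate the network can sustain, switching up
    only when the client-side (player) buffer is at a safe level."""
    sustainable = [b for b in bitrates if b <= send_speed_bps]
    if buffer_sec < safe_buffer_sec or not sustainable:
        return min(bitrates)      # protect playback: lowest quality
    if remaining_sec < safe_buffer_sec:
        return min(sustainable)   # little playback time left: be conservative
    return max(sustainable)       # buffer healthy: highest sustainable rate
```

Here the client reports `buffer_sec` and `remaining_sec` to the server, which combines them with its measured sending speed; this is the kind of "stream switch" decision that the server/client collaboration enables.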
5.2 Second Embodiment
[0169] The data flow control method according to the second
embodiment of the present application is concerned with modes and mechanisms
for data sharing, local data caching and reuse of such data to
reduce network overheads. These are explained in detail below:
[0170] 5.2.1 Data Recycling Mode
[0171] Users sometimes request the same media data many times from
one or more servers. For instance, users request streaming servers
for their favourite song or a favourite video which they often
watch many times. This behaviour causes unnecessary bandwidth usage
and affects the overall internet ecosystem. Some experts predict
that this type of unnecessary repeated consumption accounts
for 30% of the bandwidth consumed every day. The same situation
arises when one family or household purchases a newly released movie
but is not able to find time to watch it together. This leads to
multiple viewings and streamings of the same movie and consumes
unnecessary bandwidth and other network resources.
[0172] The data flow control mechanism of the present invention in
the second embodiment overcomes this by automatically caching the
most recently popular viewed content at the client/local device,
based on a pre-set storage space in the client device. The contents
that are cached in or removed from this storage can be selected
based on their popularity score. By doing this, popularly viewed
contents reside on the device and can be re-viewed even if the
device is not connected to the internet. This prevents unnecessary
retransmissions and conserves overall energy.
[0173] The data recycling mechanism of the flow control method of
the second embodiment is available for video on demand (VOD) or
replay TV. In order to achieve this mode, a data caching policy
module is implemented at the client end, along with, for instance, a
RAM buffer of 20 MB and a local storage reserve of 2 GB (HDD). The
data recycling mechanism includes rules and policies to specify
that data delivered to the client device will be indexed, organized
and recorded for later use.
[0174] FIG. 6 shows a flowchart depicting a data flow control
mechanism with data recycling, such that a local cache is consulted
before data is pulled from the server. Before a playback session is
started, the data flow control method of the present invention
first checks for the content at both local RAM and HDD storage. If
there is a copy locally, this is played immediately. In some
instances, only part of the popular content may be locally cached.
In this case, the data recycling mechanism, at the time of playback,
is also configured to request the streaming server to start
streaming any missing portion of the video file. The continuing
portion of the data will be requested at the same bitrate level,
and after the first few Groups of Pictures (GOPs), streaming is
resumed for the rest of the session. This method allows bandwidth to
be efficiently utilised only when necessary and can achieve instant
playback, which also improves the user experience.
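A minimal sketch of this cache-first session planning is given below; the cache layout (dicts keyed by content ID) and the function names are illustrative assumptions:

```python
def find_cached(content_id, ram_cache, disk_cache):
    """Return (data, end_offset, bitrate) for a cached copy, checking RAM
    before disk storage, or None if the content is not cached locally.
    Cache entries are dicts: {"data": bytes, "end": int, "bitrate": int}."""
    entry = ram_cache.get(content_id) or disk_cache.get(content_id)
    if entry is None:
        return None
    return entry["data"], entry["end"], entry["bitrate"]

def plan_session(content_id, total_len, ram_cache, disk_cache):
    """Decide what to request from the server: the whole file, nothing,
    or only the missing tail at the cached copy's bitrate."""
    hit = find_cached(content_id, ram_cache, disk_cache)
    if hit is None:
        return {"request": "full"}
    _, end, bitrate = hit
    if end >= total_len:
        return {"request": "none"}   # fully cached: playable even offline
    # partial hit: play locally, stream the missing portion at the same bitrate
    return {"request": "tail", "offset": end, "bitrate": bitrate}
```

For example, a fully cached movie yields `{"request": "none"}`, while a half-cached one yields a `"tail"` request starting at the cached end offset.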
[0175] 5.2.2 Hybrid P2P Streaming
[0176] Point to point communication between servers or client
devices, commonly referred to as P2P, is not a new concept and has
been widely used in many applications over the Internet. However,
existing P2P policies are unsuitable for high quality video
streaming in IPTV applications. The data flow control methods of
the present invention propose a "Hybrid P2P" streaming mechanism
which works as a combination of Client-to-Server and
Client-to-Client P2P. Which of these P2P methods is used is
determined by whether the network is safe enough to share data with
other peers without impacting smooth video playback.
[0177] The hybrid P2P mechanism of the second embodiment initially
involves requesting data from the streaming server as normal. Once
the player buffer is at a safe level, the hybrid P2P mechanism
considers getting the data from the closest neighbouring client
device (peer) rather than requesting the server for data. In order
to co-exist with the adaptive streaming strategy and provide the
best quality (see 5.1.5), a high video bitrate is exchanged in the
P2P system. This hybrid P2P streaming mode is typically triggered
when the buffer is healthy, and the data flow control method exits
the hybrid P2P mode when the buffer is less than 15% full.
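The enter/exit behaviour can be sketched as a small hysteresis rule; the 80% entry threshold and the function shape are assumptions for this sketch, while the 15% exit level follows the text:

```python
def select_source(buffer_pct, peers, in_p2p=False,
                  enter_threshold=80.0, exit_threshold=15.0):
    """Hysteresis between server and peer sourcing: enter hybrid P2P only
    when the buffer is healthy; fall back to the server below 15%.
    Returns (source, new_in_p2p_state)."""
    if buffer_pct < exit_threshold or not peers:
        return "server", False            # buffer unsafe or no peers available
    if in_p2p or buffer_pct >= enter_threshold:
        return "peer", True               # fetch from closest neighbouring peer
    return "server", False                # buffer not yet healthy enough
```

The two distinct thresholds prevent rapid oscillation between modes when the buffer hovers near a single cut-off value.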
[0178] When it comes to video quality, today's IPTV users expect a
lot more than just Standard Definition (SD) (480 pixels). Most of
the content produced today is in High Definition (HD) (720, 1080
pixels) quality and demands a new set of streaming requirements.
These requirements include higher server specifications, multiple
processing cores, a larger bandwidth backbone and extensive I/O
performance for network devices. Existing SD service providers and
other ISPs are also required to invest more in network upgrades.
Server hosting and delivery costs also increase as high quality
video demands more data usage. The hybrid P2P mechanism of the data
flow control method of the present invention copes with these
issues.
[0179] The hybrid P2P mechanism of data flow control is a data
sharing concept involving a combination of a server sending data to
clients, a client sharing data with another client, and a client
sharing data with many clients. This can be viewed as a hierarchical
tree structure, with server A being the original source of data,
which provides it to client A; client A then provides this data to
client nodes B, C, D and so on. Thus, the source for a leaf node X
can either be a streaming server or another leaf node that has the
same data and is capable of providing this data to leaf node X.
[0180] Utilizing hybrid P2P under normal network conditions will
eventually peak streaming at the highest video bitrate. When network
resources and speed have been good for a predefined time and the
condition remains good, the data flow control mechanism of the
second embodiment switches the player to hybrid P2P. Client devices
that participate in P2P need to be configured such that they can act
as a "data giver", a "data consumer" or both; this information can
be stored in the backend systems and accessed when a data file that
is also available on a data giver's device is requested by another
client. When a client device initially streams a movie, the movie
information and the data blocks are recorded in a central database
for future distribution guidance. If another client participates in
the hybrid P2P mode and requests a particular content item, the
information in the databases will guide this client to those peers
that have the content and are permitted as "data givers". If the new
client's request finds no match, the player will automatically exit
hybrid P2P and resume data flow based on the other data sharing
modes set out in the above embodiments of the present invention.
[0181] In the hybrid P2P network mechanism, one client device can
share cached data with one or multiple clients within the network,
and vice versa. With the use of hybrid P2P, the server side can save
up to 90% of its bandwidth and I/O resources. The more client
devices are registered as data givers in a P2P network, the lower
the load required on the server side. This allows IPTV service
operators to reduce server hosting costs significantly.
5.3 Third Embodiment
[0182] The third embodiment of the data flow control method of the
present invention includes modes and mechanisms for Video Quality
Control to ensure that the highest quality of video data is
provided to IPTV end users. These mechanisms are explained
below:
[0183] 5.3.1 Adaptive Bitrate Streaming for High Quality Video
[0184] This is derived from the adaptive bitrate streaming set out
in item 5.1.5 in relation to the first embodiment. The adaptive
streaming in the third embodiment is based on a video quality
greedy policy. Adaptive streaming is a technique used in streaming
multimedia over computer networks and functions by detecting a
user's bandwidth and CPU capacity in real time, and adjusting the
quality of a video stream accordingly. This mechanism requires the
use of an encoder which can encode a single source video at
multiple bit rates. The player client is capable of switching
between streaming the different encodings depending on available
resources. This results in very little buffering, fast start time
and a good experience for both high-end and low-end
connections.
[0185] A preferred mechanism for implementing adaptive data
streaming for high quality video data is shown in FIG. 7 and is
also explained below:
[0186] The following constant values are defined based on how many
video streams are available. Given streamNr video streams, the
following control parameters are provided:
[0187] QualityGreedyThreshSec=streamNr*2
[0188] QualityGreedyDurSec=streamNr*2.5
[0189] NoLimitUpThreshSec=streamNr*8
[0190] NoLimitUpThreshSec means that the server can switch to upper
bit rate level quality without any limitation.
[0191] QualityGreedyDurSec means the length of time to maintain the
Quality Greedy Switch Policy. This policy will either keep or
switch to upper bit rate level quality.
[0192] QualityGreedyThreshSec defines when to start Quality Greedy
Switch Policy.
[0193] With these definitions, the adaptive switch process is
described as follows in relation to FIG. 7:
[0194] 7a. During playback, if the cache time on the player side is
less than 12 seconds, then the server will switch to the lowest bit
rate.
[0195] 7b. Otherwise, if the cache time is less than
QualityGreedyThreshSec, then additional conditions are required to
be checked. If a current sending operation is blocked due to
network issues and the server sending speed is more than
1.6×rt_bitrate, the server will keep its current bit rate level
and let the `Exponential Speed Up` mode in 5.1.1 control the speed
adjustment. But if the server sending speed is not more than
1.6×rt_bitrate, the server will switch to a lower bit rate quality
level.
[0196] 7c. When the cache time is less than QualityGreedyThreshSec
and the sending operation is not blocked, then if the server sending
speed is more than 1.0×rt_bitrate the server will switch to an
upper bit rate quality level. If the server sending speed is not
more than 1.0×rt_bitrate, then the current video quality is
maintained.
[0197] 7d. If, when the cache time is more than
QualityGreedyThreshSec and less than NoLimitUpThreshSec, the server
sending operation is blocked due to a network issue,
[0198] the adaptive streaming mechanism maintains the current video
quality. If the sending operation is not blocked, the stream is
switched to an upper bit rate quality level.
[0199] 7e. When the cache time is more than NoLimitUpThreshSec, the
server will switch to upper bit rate quality levels until the
highest level is reached.
[0200] 7f. Whenever the highest bit rate quality level has not been
achieved, a maximum sending speed of 1.6×rt_bitrate is set. This
limits the low quality video time slot when the network becomes
good/normal again in the future.
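Steps 7a-7f can be condensed into a single decision function, sketched below; the return labels and argument names are assumptions for this sketch, while the thresholds follow the formulas given for QualityGreedyThreshSec and NoLimitUpThreshSec:

```python
def adaptive_switch(cache_sec, blocked, send_speed, rt_bitrate, stream_nr):
    """One pass of the 7a-7f adaptive switch decision (FIG. 7).
    Returns one of 'lowest', 'down', 'keep', 'up'."""
    greedy_thresh = stream_nr * 2     # QualityGreedyThreshSec
    no_limit_up = stream_nr * 8       # NoLimitUpThreshSec
    if cache_sec < 12:                                        # 7a
        return "lowest"
    if cache_sec < greedy_thresh:
        if blocked:                                           # 7b
            return "keep" if send_speed > 1.6 * rt_bitrate else "down"
        return "up" if send_speed > 1.0 * rt_bitrate else "keep"   # 7c
    if cache_sec < no_limit_up:                               # 7d
        return "keep" if blocked else "up"
    return "up"                                               # 7e: toward highest
```

With streamNr = 8, for example, the greedy threshold is 16 seconds and the no-limit-up threshold is 64 seconds.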
[0201] 5.3.2 Start Bitrate Selection for High Quality Video
Data
[0202] A start bitrate selection mechanism is proposed as part of
the data flow control mechanism of the present invention. Before
playback, the local storage or cache is checked. If there is a copy
locally, the request is not sent to the streaming server and instead
the player plays the local data immediately (similar to the data
recycling of 5.2.1). When all the local data has been consumed, the
bitrate selection mechanism according to the third embodiment of the
present invention continues streaming at the same bit rate as the
local file. The player side requests the appropriate files from the
server and starts playing. The initial bitrate does not need to be
the lowest video bitrate. There are many factors which determine
which bitrate should be played. In the present embodiment, this
depends on the resolution of the playback device screen. For
example, if this is a big screen TV, the lowest video bitrate is not
suitable and could give very bad video quality. There is therefore a
balance to be struck between fast start and video quality in the
bitrate selection mechanism of the data flow control method of the
invention.
[0203] A preferred method for implementing the start bitrate
selection mechanism according to the third embodiment can be seen in
FIG. 8. Nowadays, there are many different types of devices and
screen sizes capable of displaying streaming video. Each device
supports a different video resolution, and sending an incorrect
resolution size can cause un-viewable video or crashes of the
associated hardware. Therefore, there is a requirement for
resolution identification before requesting video from the server.
In the present invention, this can be achieved by implementing a
player type ID in the player software. When the player requests a
video file, the data flow control method is configured to check the
device type ID and determine which video file is to be sent, instead
of always starting with the lowest video bitrate. The start bitrate
selection mechanism can provide noticeable results. For example,
when movies are played on a big screen TV, a lower video bitrate
often shows lots of flaws. Therefore, when the player type is
identified as a TV, the data flow control means can start sending
bitrates at the 2nd or 3rd highest level at the outset rather than
the lowest bitrate.
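A sketch of start bitrate selection keyed on the player type ID is shown below; the device-type strings and the mid-level rule for tablets are assumptions, while the 2nd-highest starting level for TVs follows the text:

```python
def start_bitrate(device_type, bitrates):
    """Pick the initial bitrate from the player's device-type ID instead
    of always starting at the lowest level."""
    levels = sorted(bitrates)                  # lowest to highest
    if device_type == "tv":
        # big screens: start at the 2nd highest level, not the lowest
        return levels[-2] if len(levels) >= 2 else levels[-1]
    if device_type == "tablet":
        return levels[len(levels) // 2]        # mid level (assumed rule)
    return levels[0]                           # phones/unknown: lowest, safest
```

This trades a slightly slower start on TVs for acceptable picture quality, while small screens keep the fast-start lowest bitrate.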
[0204] 5.3.3 Selective Frame Drop Mode
[0205] During a live IPTV streaming session, the internet
connection can sometimes fall below the lowest video bitrate, and
all of the video streaming mechanisms and modes that are applied may
not be able to cope with the congestion. This event is rare, but it
can cause the buffer to be emptied and video playback to be
interrupted. Sometimes a few kbps of data makes the difference
between smooth playback and video buffering. When this condition
occurs, a choice of either accepting the video buffering effect or
providing other options for maintaining smooth playback is to be
made by the data flow control means.
[0206] The selective frame drop mode of the present invention is
depicted in FIG. 9. This mechanism is a dynamic procedure that
selectively does not stream non-key frames (i.e. B/P frames) within
a video GOP. This creates a video jumping effect, but at the same
time it allows continuous streaming when only 20% of the required
bandwidth is available. This is arguably an acceptable effect during
a bandwidth shortage period. Audio is not degraded or interrupted,
which in most cases will be acceptable to users.
[0207] For example, smooth playback can be ensured by setting two
dropping levels: A. dropping 50% of B/P frames and B. dropping 100%
of B/P frames in one GOP (Group of Pictures). This temporarily
reduces the required bandwidth by 30%-80% and utilises the saving to
transmit the remaining video frames and the audio to the player.
During this time window, the video may present some skipping effect
and is likely to remain this way until the network recovers from
the severe temporary congestion.
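The two dropping levels can be sketched as a per-GOP frame filter; the GOP representation (a list of frame dicts) and the 0.8 bandwidth-ratio cut-off between levels A and B are assumptions for this sketch:

```python
def frames_to_send(gop, bandwidth_ratio):
    """Select which frames of one GOP to transmit. bandwidth_ratio is
    available bandwidth divided by required bandwidth. Level A drops
    50% of B/P frames; level B drops all of them. I-frames always pass."""
    if bandwidth_ratio >= 1.0:
        return list(gop)                  # no congestion: send everything
    out, bp_seen = [], 0
    for frame in gop:
        if frame["type"] == "I":
            out.append(frame)             # key frames are never dropped
        else:
            bp_seen += 1
            # level A (ratio >= 0.8): keep every other B/P frame
            if bandwidth_ratio >= 0.8 and bp_seen % 2 == 1:
                out.append(frame)
            # level B (ratio < 0.8): drop all B/P frames
    return out
```

Keeping the I-frames (and, separately, the audio track) is what preserves a continuous, if jumpy, picture under severe congestion.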
[0208] This frame drop method acts as a final attempt, in case any
of the above described modes fail, and can ensure that continuous
streaming is not interrupted. It ensures that the flow control
mechanism can still provide quality continuous video data, and can
stream 200 kbps video files over a 100 kbps internet pipe for a
short moment to maintain smooth playback. FIG. 10a is an
indication of the IPTV end user viewing experience when the
selective frame drop mechanism is applied, and FIG. 10b is an
indication of the viewing experience without this mechanism. As
depicted, a buffering effect is inevitable in FIG. 10b.
[0209] 5.3.4 High Motion Picture First Policy
[0210] Video encoding can have different modes and filters to
enhance video quality. The data flow control method according to
the present invention provides a mechanism or policy for dealing
with high motion picture frames to enhance the viewing experience,
provide the highest viewing quality and efficiently manage network
resources. In the H.264 codec, VBR (Variable Bitrate) encoding is a
mode that can yield good video quality output. This encoding
generates a large GOP (Group of Pictures) for fast-motion scenes and
a smaller GOP for scenes with less motion. Each time the flow
control method processes big GOPs, it consumes more network
resources, which creates network spiking or jittering. If this
factor is not taken into consideration, "fast motion picture" scenes
may trick the data flow protocol in use into falsely switching from
the current video quality to the next lower quality level. This is
because the big GOP may falsely alert the adaptive streaming
mechanism of the data flow control (set out in 5.3.1) to switch to a
lower bitrate. This false trigger significantly impacts the viewing
experience. To avoid this false alert, the present invention in the
third embodiment proposes a data flow control mechanism implementing
a policy described below, referred to as the `high-motion picture
first` or high-motion picture priority policy, to obtain a better
viewing experience under limited network conditions.
[0211] The high-motion picture first mechanism is set out in FIG.
11. At the beginning of the video session, the first minute or so
is utilised by the data flow control method to gauge and detect
network bandwidth. If it is determined that the bandwidth is
adequate to sustain the highest video bitrates, incremental bitrate
switching is stopped and the flow control jumps directly to the
highest level. Selective GOP handling is also an important factor in
enhancing the video viewing experience of the data flow control
mechanism of the present invention. Each GOP is inspected and its
size is considered. If the GOP size is much bigger than the video
bitrate, this translates into a high motion event. Such big GOPs in
a low network environment can lead to buffering, and big GOPs at a
low encoding bitrate expose pixelation, leading to poor video
quality.
[0212] The GOP size is calculated by the data flow control
mechanism before sending the first packet of the moving picture in
one GOP. If the average bit rate in this GOP is 30% less than the
average bit rate of the current movie clip, this GOP is flagged as
a "Low Motion Picture GOP". If the average bit rate in one GOP is
30% more than the average bit rate of the current movie clip, this
GOP is flagged as a "Fast Motion Picture GOP".
[0213] The data flow control mechanism continues to monitor the
network throughput by checking whether the highest video bitrate is
being streamed or not. If streaming is not at the highest bitrate,
the network condition is deemed to be poor and the data flow control
method will not be able to send data at the higher bitrate at all
times. This condition would impact playback video quality
significantly, as a slight change of bandwidth would falsely tell
the player to request a lower bitrate. To overcome this problem, the
condition of the local buffer as well as the type of GOP that is
being sent is identified. If the buffer is at the safe threshold and
the GOP being sent is flagged as a "Low Motion Picture GOP", then
the data flow control method switches to a lower bitrate to yield
additional bandwidth for a "Fast Motion Picture GOP". Thus the data
flow control mechanism clocks down to a lower bitrate for static or
low motion picture GOPs to preserve bandwidth for the bigger GOPs at
a higher bitrate. As a result, constant video quality as well as a
smooth streaming effect is maintained.
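The 30% flagging rule of [0212] and the low-motion downswitch can be sketched together as follows; the numeric level scheme in `pick_send_bitrate` (0 = lowest) is an assumption for this sketch:

```python
def classify_gop(gop_bitrate, clip_avg_bitrate):
    """Flag a GOP: 30% below the clip average => low motion,
    30% above => fast motion, otherwise normal."""
    if gop_bitrate < 0.7 * clip_avg_bitrate:
        return "low_motion"
    if gop_bitrate > 1.3 * clip_avg_bitrate:
        return "fast_motion"
    return "normal"

def pick_send_bitrate(gop_flag, current_level, buffer_safe, at_highest):
    """High-motion-first policy: when not already at the highest bitrate
    and the buffer is safe, clock low-motion GOPs down one level to bank
    bandwidth for fast-motion GOPs."""
    if at_highest:
        return current_level                   # no bandwidth pressure
    if buffer_safe and gop_flag == "low_motion":
        return max(current_level - 1, 0)       # preserve bandwidth
    return current_level                       # fast/normal GOPs keep level
```

In effect, static scenes are quietly cheapened so that high-motion scenes can keep their higher bitrate during congestion.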
[0214] The high motion picture first policy allows the data flow
control method to continue sending higher bitrates during a
congestion time window to always allocate more bandwidth for those
high motion picture GOPs.
[0215] 5.3.5 Buffer Enhancement and Repair
[0216] Dynamic adaptive streaming (5.3.1) and selective frame drop
(5.3.3) function to maintain smooth streaming and to combat internet
bandwidth fluctuations and instability. However, these mechanisms
can sometimes cause negative impacts such as visible video
degradation or frame jumping in real time, in parallel with the
network condition. These negative effects are considered
unavoidable, and current technologies do not address them. The data
flow method of the present invention proposes a buffer repair or
enhancement mechanism to monitor the network condition and buffer
filling rates, and then to predict how much time and speed is
available to allow the flow control method to replace a lower
quality GOP in the buffer with a high quality GOP.
[0217] This mechanism of buffer repair improves video playback
quality. As streaming takes place, the network fluctuates and so
does video quality. In adaptive bitrate streaming, the buffer is
segmented into multiple segments which consist of various video
bitrates that form a continuous playback timeline. Some segments are
of low bitrates, which has a negative impact on the viewing
experience. To address this problem, the buffer repair mechanism of
the data flow control is applied when the buffer reaches a safe
level, i.e. 80% full. During this mode, the flow control method is
configured to check for previously streamed segments in the buffer
that have low video bitrates and are still queuing for playback. The
buffer repair mechanism is then configured to request that these
segments be replaced with higher video bitrate versions before their
turn for playback arrives. This ensures that the first part of the
buffer always has the highest video bitrate and playback proceeds
with the highest video quality.
[0218] The buffer repair or enhancement mechanism is shown in FIG.
12 and is explained in detail below:
[0219] 12a. During the playback session, the flow control mechanism
ensures that the player maintains a single GOP queue which stores
all the GOP data that will be sent to the video decoder.
[0220] 12b. The player monitors the GOP queue periodically. If the
time span of this queue is less than 10 seconds, then no action is
taken. Otherwise, the player will check whether there is any GOP in
the queue having only part of its B/P frames (a Partial GOP). If
there is a Partial GOP that will not be sent to the decoder within
10 seconds, then the mechanism is configured to check the current
server sending speed. If the server sending speed is less than or
equal to 1.0×rt_bitrate, then no action is taken. Otherwise, if the
sending speed is more than 1.0×rt_bitrate, the flow control
mechanism requests the server to resend that GOP with all frames at
the same quality.
[0221] 12c. After receiving the GOP with all frames resent by the
server, the player will then use this GOP to replace the Partial GOP
in the GOP queue.
[0222] 12d. If there is no Partial GOP in the queue, and if the
queue time span is less than 15 seconds, no action is taken.
[0223] 12e. Otherwise, the lowest quality GOP is identified in the
queue and compared with the quality of the GOP currently being
received. If the lowest quality GOP is of higher quality than the
current data, no action is taken. Otherwise, if the server's current
sending speed is more than 1.0×rt_bitrate, the player will request
the server to resend this GOP at one quality level higher.
[0224] 12f. After receiving this GOP at one quality level higher,
resent by the server, the data flow mechanism uses it to replace the
old GOP in the GOP queue.
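Steps 12b-12e can be condensed into one periodic check, sketched below; the argument names and the interpretation of the Partial GOP timing as "seconds until it is due at the decoder" are assumptions for this sketch:

```python
def repair_action(queue_span_sec, has_partial_gop, partial_eta_sec,
                  send_speed, rt_bitrate, lowest_q, current_q):
    """One monitoring pass over the GOP queue (FIG. 12, steps 12b-12e).
    Returns the repair request to issue, or None for no action."""
    if queue_span_sec < 10:                                   # 12b: queue too short
        return None
    if has_partial_gop and partial_eta_sec >= 10:
        if send_speed > 1.0 * rt_bitrate:
            return "resend_full_gop"    # refill the missing B/P frames
        return None                     # sending speed too low to repair
    if queue_span_sec < 15:                                   # 12d
        return None
    if lowest_q >= current_q:                                 # 12e: nothing worse queued
        return None
    if send_speed > 1.0 * rt_bitrate:
        return "resend_higher_quality"  # replace the lowest-quality GOP
    return None
```

The spare sending capacity above 1.0×rt_bitrate is what funds the replacement downloads without starving live playback.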
5.4 Fourth Embodiment
[0225] The following mechanisms provide flow control techniques
that can be applied to existing TCP data transmissions to provide
an enhanced data flow control method according to a fourth
embodiment of the present invention.
[0226] 5.4.1 Dynamic Nagle Algorithm
[0227] The Nagle algorithm explained in Background section 2 has a
default time delay of 200+ ms, which negatively impacts IPTV
services, especially when users initiate interactive services such
as channel changing, content queries, accounting etc. Therefore, the
present invention proposes a method to dynamically enable/disable
the Nagle algorithm based on the type of action and request, to
ensure the best effect can be achieved. Ideally, the flow control
method of the fourth embodiment disables the Nagle algorithm when a
command exchange between the user device and the server is detected.
This eliminates at least 200 ms of delay on the TCP transport layer.
By applying the algorithm dynamically, it is possible to reduce the
number of packets injected into the network while also improving the
user's interactive experience.
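On a POSIX system this dynamic toggle maps directly onto the standard `TCP_NODELAY` socket option, as sketched below; the message-type dispatch is an assumed example:

```python
import socket

def set_nagle(sock, enabled):
    """Toggle the Nagle algorithm on a TCP socket. Disabling it
    (TCP_NODELAY=1) removes the ~200 ms coalescing delay for small
    interactive messages such as channel-change commands."""
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY,
                    0 if enabled else 1)

def on_message(sock, msg_type):
    """Assumed dispatch: disable Nagle for interactive command traffic,
    re-enable it for bulk media data to keep the packet count low."""
    set_nagle(sock, enabled=(msg_type != "command"))
```

Re-enabling Nagle for bulk media transfers lets the kernel coalesce small writes again, which is what keeps the overall packet count down.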
[0228] A preferred procedure for applying the dynamic Nagle
algorithm as explained above is shown in FIG. 13. Details of tests
conducted with the Nagle algorithm in an enabled, a disabled and an
adaptive state are shown in FIG. 14a, with the different packet
sending rates for each of the above-mentioned states shown in FIG.
14b. These tests were carried out in a LAN environment.
[0229] 5.4.2 Amended Linux Controls
[0230] The following Linux controls can be applied to existing TCP
to provide an enhanced data flow control according to the present
invention.
[0231] A. net.ipv4.tcp_window_scaling=1
This parameter allows TCP to use a big window size on the receiver
and sender. This will increase overall throughput.
[0233] B. net.ipv4.tcp_timestamps=1
[0234] This parameter allows TCP to use the time stamp option in its
header. This helps TCP to estimate the RTT (round trip time) value.
[0235] C. net.ipv4.tcp_sack=1
[0236] This parameter allows the TCP receiver to send selective
acknowledgments to report multiple packet losses, instead of only
one packet per acknowledgement. This helps the sender to retransmit
the lost packets more quickly.
[0237] D. net.ipv4.tcp_congestion_control=cubic
[0238] This parameter is only valid for kernel 2.6.13 or later
versions. It allows the user to change the congestion control
algorithm to obtain better performance for specific applications.
[0239] E. net.core.rmem_max/net.core.wmem_max
[0240] These parameters control the window size advertised by the
sender and receiver. They affect TCP throughput by limiting the
amount of data in flight in the network. The window size can be
enlarged according to the network environment supplied.
[0241] F. The new kernel
[0242] From Linux version 2.6.17 onwards, the cwnd (congestion
window) can be up to 4 MB. This will increase TCP throughput on high
speed networks.
5.5 Interaction of the Data Flow Control Techniques of the Present
Invention
[0243] The interaction of the above described modes, policies,
methods and mechanisms that make up the proposed data flow control
protocol or method of the first, second and third embodiments of the
present invention is shown in FIG. 13.
[0244] While certain embodiments have been described, these
embodiments have been presented by way of example only, and are not
intended to limit the scope of the invention. Indeed, the novel
devices, methods, and products described herein may be embodied in
a variety of other forms; furthermore, various omissions,
substitutions and changes in the form of the methods and systems
described herein may be made without departing from the spirit and
scope of the invention. The accompanying claims and their
equivalents are intended to cover such forms or modifications as
would fall within the scope of the embodiments.
* * * * *