U.S. patent application number 12/811971 was filed with the patent office on 2010-11-04 for systems and methods for using dc change parameters in video coding and decoding.
This patent application is currently assigned to TELEFONAKTIEBOLAGET LM ERICSSON (PUBL). The invention is credited to Kenneth Andersson, Per Frojdh, Clinton Priddle, Jonatan Samuelsson, and Rickard Sjoberg.
United States Patent Application 20100278269
Kind Code: A1
Andersson; Kenneth; et al.
November 4, 2010

Systems and Methods for Using DC Change Parameters in Video Coding and Decoding
Abstract
The present application discloses systems and methods for using
DC change parameters in video coding. In one embodiment, the method
includes the steps of: (a) obtaining a DC change parameter; (b)
decoding encoded video data to obtain reconstructed pixel values;
and (c) using the reconstructed pixel values, a filter, and the DC
change parameter to obtain filtered reconstructed pixel values with
a DC change.
Inventors: Andersson; Kenneth (Gavle, SE); Frojdh; Per (Stockholm, SE); Priddle; Clinton (Upplands Vasby, SE); Samuelsson; Jonatan (Skarpnack, SE); Sjoberg; Rickard (Stockholm, SE)
Correspondence Address: COATS & BENNETT, PLLC, 1400 Crescent Green, Suite 300, Cary, NC 27518, US
Assignee: TELEFONAKTIEBOLAGET LM ERICSSON (PUBL), Stockholm, SE
Family ID: 40202148
Appl. No.: 12/811971
Filed: January 8, 2009
PCT Filed: January 8, 2009
PCT No.: PCT/SE2009/050005
371 Date: July 7, 2010
Current U.S. Class: 375/240.16; 375/240.01; 375/E7.243
Current CPC Class: H04N 19/61 (20141101); H04N 19/82 (20141101); H04N 19/136 (20141101); H04N 19/176 (20141101); H04N 19/70 (20141101); H04N 19/117 (20141101); H04N 19/463 (20141101)
Class at Publication: 375/240.16; 375/240.01; 375/E07.243
International Class: H04N 7/32 (20060101); H04N007/32

Foreign Application Data

Date | Code | Application Number
Jan 8, 2008 | US | 61/019727
Apr 11, 2008 | US | 61/123769
Claims
1-28. (canceled)
29. A method performed by a video codec, comprising: (a) obtaining
a DC change parameter; (b) decoding encoded video data to obtain
reconstructed pixel values; and (c) applying the DC change
parameter to the reconstructed pixel values as a part of a
filtering process, or after a filtering process, to obtain filtered
reconstructed pixel values with a DC change.
30. The method of claim 29, further comprising obtaining a DC
change parameter for each of a plurality of coding parameters.
31. The method of claim 30, wherein at least one of said coding
parameters is a sub-pel property of a motion vector used in inter
frame prediction.
32. The method of claim 29, wherein the filtered reconstructed
pixel values are an inter frame prediction.
33. The method of claim 29, wherein the filtered reconstructed
pixel values are a loop filtered reconstruction.
34. The method of claim 29, wherein said applying the DC change
parameter to the reconstructed pixel values as part of a filtering
process comprises: using the DC change parameter to modify the
filter to obtain a modified filter; and filtering the reconstructed
pixel values using the modified filter.
35. The method of claim 29, wherein said applying the DC change
parameter to the reconstructed pixel values after a filtering
process comprises: filtering the reconstructed pixel values using
the filter to produce filtered reconstructed pixel values; and
modifying the filtered reconstructed pixel values by addition with
the DC change parameter.
36. The method of claim 29, wherein said applying the DC change
parameter to the reconstructed pixel values after a filtering
process comprises: filtering the reconstructed pixel values using
the filter to produce filtered reconstructed pixel values; and
modifying the filtered reconstructed pixel values by multiplication
with the DC change parameter.
37. The method of claim 29, further comprising using the filtered
reconstructed pixel values to produce a motion vector and a
predicted block of pixel values.
38. The method of claim 37, further comprising: determining a
prediction error block corresponding to the predicted block of
pixel values; applying a frequency selective transform on the
prediction error block to obtain a block of transform coefficients;
and quantizing and entropy encoding the block of transform
coefficients.
39. The method of claim 38, further comprising: entropy encoding
the DC change parameter; and transmitting to a decoder the motion
vector, the entropy encoded DC change parameter, and the quantized
and entropy encoded block of transform coefficients.
40. A method of decoding encoded video data, comprising: (a)
receiving the encoded video data, the encoded video data comprising
a motion vector and a DC change parameter associated with the
motion vector; (b) producing a predicted block of pixel values
using a block of pixel values of a previously decoded frame, the
motion vector, and the DC change parameter, where the DC change
parameter is aligned with motion compensation of a current block;
and (c) using the predicted block of pixel values and prediction
error information to produce a reconstructed block of pixel
values.
41. The method of claim 40, wherein the DC change parameter
included in the encoded video data is entropy encoded and the
method further comprises entropy decoding the DC change parameter
prior to using the DC change parameter to produce the predicted
block of pixel values.
42. A video codec, comprising: a data processing system; and a data
storage system storing video codec software, the video codec
software comprising: computer instructions for obtaining
reconstructed pixel values; and computer instructions for applying
a DC change parameter to the reconstructed pixel values as a part
of a filtering process, or after a filtering process, to obtain
filtered reconstructed pixel values with a DC change.
43. The video codec of claim 42, further comprising computer
instructions for obtaining a set of DC change parameters, each DC
change parameter included in the set being associated with a coding
parameter.
44. The video codec of claim 43, wherein at least one of said
coding parameters is a sub-pel property of a motion vector used in
inter frame prediction.
45. The video codec of claim 42, further comprising a receiver for
receiving encoded video data.
46. The video codec of claim 45, wherein the received encoded video
data comprises an entropy encoded DC change parameter; and the
video codec software further comprises computer instructions for
entropy decoding the DC change parameter.
47. The video codec of claim 42, wherein the filtered reconstructed
pixel values are an inter frame prediction.
48. The video codec of claim 42, wherein the filtered reconstructed
pixel values are a loop filtered reconstruction.
49. The video codec of claim 42, wherein the computer instructions
for using the reconstructed pixel values, the filter, and the DC
change parameter to obtain the filtered reconstructed pixel values
with the DC change comprise computer instructions for using the DC
change parameter to modify the filter to obtain a modified filter
and filtering the reconstructed pixel values using the modified
filter.
50. The video codec of claim 42, wherein the computer instructions
for using the reconstructed pixel values, the filter, and the DC
change parameter to obtain the filtered reconstructed pixel values
with the DC change comprise computer instructions for filtering
the reconstructed pixel values using the filter to produce filtered
reconstructed pixel values and modifying the filtered reconstructed
pixel values by addition with the DC change parameter.
51. The video codec of claim 42, wherein the computer instructions
for using the reconstructed pixel values, the filter, and the DC
change parameter to obtain the filtered reconstructed pixel values
with the DC change comprise computer instructions for filtering
the reconstructed pixel values using the filter to produce filtered
reconstructed pixel values and modifying the filtered reconstructed
pixel values by multiplication with the DC change parameter.
52. The video codec of claim 42, wherein the video codec software
further comprises computer instructions for using the filtered
reconstructed pixel values to produce a motion vector and a
predicted block of pixel values.
53. A video decoder, comprising: a receiver for receiving encoded
video data, the encoded video data comprising a DC change parameter
associated with a coding parameter; a data storage system storing
video codec software; and a data processing system operable to
execute the video codec software, wherein the video codec software
comprises: computer instructions for producing a predicted block of
pixel values using a block of pixel values of a previously decoded
frame and the DC change parameter, wherein the DC change parameter
is aligned with motion compensation of a current block; and
computer instructions for using the predicted block of pixel values
and prediction error information to produce a reconstructed block
of pixel values.
54. The video decoder of claim 53, wherein the DC change parameter
included in the encoded video data is entropy encoded and the video
codec software further comprises computer instructions for entropy
decoding the DC change parameter.
55. The video decoder of claim 53, wherein the coding parameter is
a sub-pel property of a motion vector used in inter frame
prediction.
56. The video decoder of claim 53, wherein the received encoded
video data comprises a set of DC change parameters, each DC change
parameter included in the set being associated with a coding
parameter.
Description
TECHNICAL FIELD
[0001] The present invention generally relates to video codecs
(i.e., video encoders or video decoders). In one aspect, the
invention relates to using DC change parameters in video coding and
decoding.
BACKGROUND
[0002] When encoding a sequence of video frames, temporal
redundancy can be exploited by predicting pixel values for a frame
based on pixel values from one or more previous frames. Pixel value
prediction is an important part of video coding standards such as
H.261, H.263, MPEG-4 and H.264. In H.264 there are two pixel
prediction methods, namely intra prediction and inter prediction.
Intra prediction gives a spatial prediction of the current block
using previously reconstructed pixel values in the current frame.
Inter prediction gives a temporal prediction of the current block
using a corresponding but displaced block in a previously decoded
frame (i.e., a reference block). Inter prediction can also use a
weighted average of two inter predictions and additive offsets of
the predictions. The prediction error compared to the original is
then transform coded and quantized. A reconstructed frame (i.e.,
reconstructed pixel values) can be generated by adding the
prediction and the coded prediction error. A loop filter is then
applied to reduce coding artifacts before storing the frame in a
reference frame buffer for later use by inter prediction.
[0003] In H.264 it is possible to address average pixel value (e.g.
DC value) variations between frames in inter prediction using a
multiplicative reference frame weighting factor or an additive
reference frame offset to the inter prediction. However, it is not
possible to address such variations for specific parts of a coded
region.
[0004] Accordingly, what is desired are improved systems and
methods for video encoding/decoding.
SUMMARY
[0005] In one aspect, the invention provides a method for using DC
change parameters in video coding performed by a video codec (as
used herein the term "video codec" means "video encoder or video
decoder"). In some embodiments, this method includes the following
steps: (a) obtaining a DC change parameter; (b) decoding encoded
video data to obtain reconstructed pixel values; and (c) using the
reconstructed pixel values, a filter, and the DC change parameter
to obtain filtered reconstructed pixel values with a DC change. In
some embodiments, the method also includes obtaining a DC change
parameter for each of a plurality of coding parameters, where at
least one of said coding parameters is a sub-pel property of a
motion vector used in inter frame prediction.
[0006] In some embodiments, the filtered reconstructed pixel values
are an inter frame prediction. In other embodiments, the filtered
reconstructed pixel values are a loop filtered reconstruction.
[0007] In some embodiments step (c) comprises using the DC change
parameter to modify the filter to obtain a modified filter; and
filtering the reconstructed pixel values using the modified filter.
In other embodiments step (c) comprises filtering the reconstructed
pixel values using the filter to produce filtered reconstructed
pixel values; and modifying the filtered reconstructed pixel values
by addition with the DC change parameter. In yet other embodiments,
step (c) comprises filtering the reconstructed pixel values using
the filter to produce filtered reconstructed pixel values; and
modifying the filtered reconstructed pixel values by multiplication
with the DC change parameter.
[0008] In some embodiments, the method also includes using the
filtered reconstructed pixel values to produce a motion vector and
a predicted block of pixel values; determining a prediction error
block corresponding to the predicted block of pixel values;
applying a frequency selective transform on the prediction error
block to obtain a block of transform coefficients; quantizing and
entropy encoding the block of transform coefficients; entropy
encoding the DC change parameter; and transmitting to a decoder the
motion vector, the entropy encoded DC change parameter, and the
quantized and entropy encoded block of transform coefficients.
[0009] In another aspect, the present invention provides a method
of decoding encoded video data. In some embodiments, the method
includes the following steps: (a) receiving the encoded video data,
the encoded video data comprising a motion vector and a DC change
parameter associated with the motion vector; (b) producing a
predicted block of pixel values using a block of pixel values of a
previously decoded frame, the motion vector and the DC change
parameter; and (c) using the predicted block of pixel values and
prediction error information to produce a reconstructed block of
pixel values. In some embodiments, the DC change parameter included
in the encoded video data is entropy encoded and the method further
comprises entropy decoding the DC change parameter prior to using
the DC change parameter to produce the predicted block of pixel
values.
[0010] In another aspect, the invention provides an improved video
codec. In some embodiments, the improved video codec comprises: a
data processing system; and a data storage system storing video
codec software, the video codec software comprising: computer
instructions for obtaining reconstructed pixel values; and computer
instructions for using the reconstructed pixel values, a filter,
and a DC change parameter to obtain filtered reconstructed pixel
values with a DC change.
[0011] In some embodiments, the video codec software further
includes computer instructions for obtaining a set of DC change
parameters, where each DC change parameter included in the set is
associated with a coding parameter, and at least one of said coding
parameters is a sub-pel property of a motion vector used in inter
frame prediction.
[0012] In some embodiments, the video codec also includes a receiver
for receiving encoded video data, where the received encoded video
data comprises an entropy encoded DC change parameter, and the
video codec software further comprises computer instructions for
entropy decoding the DC change parameter.
[0013] In some embodiments, the filtered reconstructed pixel values
are an inter frame prediction, while in other embodiments, the
filtered reconstructed pixel values are a loop filtered
reconstruction.
[0014] In some embodiments, the computer instructions for using the
reconstructed pixel values, the filter, and the DC change parameter
to obtain the filtered reconstructed pixel values with the DC
change comprises computer instructions for using the DC change
parameter to modify the filter to obtain a modified filter and
filtering the reconstructed pixel values using the modified filter.
In other embodiments, the computer instructions include computer
instructions for filtering the reconstructed pixel values using the
filter to produce filtered reconstructed pixel values and modifying
the filtered reconstructed pixel values by addition with the DC
change parameter. In yet other embodiments, the computer
instructions include computer instructions for filtering the
reconstructed pixel values using the filter to produce filtered
reconstructed pixel values and modifying the filtered reconstructed
pixel values by multiplication with the DC change parameter.
[0015] In some embodiments, the video codec software further
comprises computer instructions for using the filtered
reconstructed pixel values to produce a motion vector and a
predicted block of pixel values.
[0016] In another aspect, the invention provides an improved video
decoder. In some embodiments, the improved video decoder includes:
a receiver for receiving encoded video data, the encoded video data
comprising a DC change parameter associated with a coding parameter
(e.g., a sub-pel property of a motion vector used in inter frame
prediction); a data storage system storing video codec software;
and a data processing system operable to execute the video codec
software, wherein the video codec software comprises: computer
instructions for producing a predicted block of pixel values using
a block of pixel values of a previously decoded frame and the DC
change parameter; and computer instructions for using the predicted
block of pixel values and prediction error information to produce a
reconstructed block of pixel values.
[0017] In some embodiments, the DC change parameter included in the
encoded video data is entropy encoded and the video codec software
further comprises computer instructions for entropy decoding the DC
change parameter.
[0018] The above and other aspects and embodiments are described
below with reference to the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
[0019] The accompanying drawings, which are incorporated herein and
form part of the specification, illustrate various embodiments of
the present invention and, together with the description, further
serve to explain the principles of the invention and to enable a
person skilled in the pertinent art to make and use the invention.
In the drawings, like reference numbers indicate identical or
functionally similar elements.
[0020] FIG. 1 illustrates a system according to an embodiment of
the invention.
[0021] FIG. 2 is a functional block diagram illustrating an
embodiment of the invention.
[0022] FIG. 3 is a flow chart illustrating a process according to
an embodiment of the invention.
[0023] FIG. 4 is a flow chart illustrating a process according to
an embodiment of the invention.
[0024] FIG. 5 is a block diagram of an encoder according to an
embodiment of the invention.
[0025] FIG. 6 is a block diagram of a decoder according to an
embodiment of the invention.
DETAILED DESCRIPTION
[0026] Referring now to FIG. 1, FIG. 1 illustrates a system 100
according to an embodiment of the invention. As shown in FIG. 1,
system 100 includes a video encoder 102 that receives source video
data 101 and is configured to encode the source video data 101 to
produce encoded video data 103. The encoded video data 103 is
provided to a decoder 104. Decoder 104 is configured to decode the
encoded video data 103 to produce decoded video data 105, which may
be displayed by a video display 106.
[0027] It has been recognized that, typically, low-level features
of an image are similar in a local region or in several local
regions in the image. One example of a low-level feature is the
average value of pixels in a local region. There are also local
regions that change similarly between two time instants.
[0028] In one aspect of the invention, an aim is to address
variations that occur similarly in one or several local regions by
low-level adjustments valid for a larger region. In this way, local
low-level luminance and chrominance variations between original
local regions and coded local regions can be addressed by global
low-level side information. The global low-level side information
can thereby complement local side information, yielding efficiently
encoded local regions. This is illustrated in FIG. 2, which shows
that global low-level side information 201 can be utilized together
with local side information 202 to produce an encoded/decoded
local region 203. That is, global low-level side information 201
can be used to aid encoding/decoding of a local region. In some
embodiments, global low-level information 201 includes information
related to changes in average pixel value in a local region between
an original local region and a coded local region.
[0029] In addition to global low-level information 201 including
information related to changes in average pixel value, global
low-level information 201 may include information relating to
spatial variation in pixel values (e.g., gradient in vertical
and/or horizontal direction). For example, pixel values are
generally larger closer to a light source than further away from
it; pixel intensity decays in inverse proportion to the squared
distance to the light source. Thus, when an object moves closer to
a light source, the pixel values of the object increase. Not only
does the average value increase; the variation in pixel values
between one side of the object and the other can also increase
(i.e., the gradient changes). Therefore, it may be useful to
consider spatial variation in pixel values.
[0030] Another feature of the present invention is to store
low-level information related to differences between original local
regions and coded local regions and use the low-level information
globally as a starting point for encoding/decoding several local
regions. One example of such low-level information is a transform
coefficient for encoding the difference between an original region
and a predicted region. Accordingly, a feature of the invention is
to use this global low-level information to aid encoding/decoding
of a local region, wherein the global low-level information relates
to common local differences between original local regions and
coded local regions. In some embodiments, it is important that the
global low-level information is used efficiently in
encoding/decoding local regions. Accordingly, in some embodiments,
the global low-level information is attached to the sub-pel
property of the interpolation filters in motion compensated inter
prediction. For example, in some embodiments, the global low-level
information is specific to the sub-pel property of the motion
information. All regions which are inter predicted with the same
sub-pel property of the motion information, for example quarter-pel
motion both horizontally and vertically, use the same global low-level
information for improving the inter prediction. Another example is
to link the global low-level information to the mode of the intra
prediction information. All regions which are intra predicted with
the same intra prediction mode, for example vertical prediction,
use the same global low-level information for improving the intra
prediction.
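For illustration only (not part of the claimed subject matter), the linkage described above can be sketched as a lookup keyed by the sub-pel phase of the motion vector. All names and the quarter-pel table below are hypothetical, not taken from any standard:

```python
# Sketch: one DC change parameter per sub-pel phase of the motion vector.
# Quarter-pel resolution gives a 4x4 grid of phases; every block whose
# motion vector has the same phase shares the same global parameter.

def subpel_phase(mv_x_qpel, mv_y_qpel):
    """Sub-pel property of a quarter-pel motion vector: its fractional phase."""
    return (mv_x_qpel % 4, mv_y_qpel % 4)

# Hypothetical table signalled once per frame: a DC offset for every phase.
dc_params = {(px, py): 0.0 for px in range(4) for py in range(4)}
dc_params[(2, 0)] = 1.5   # e.g. all half-pel-horizontal blocks share +1.5

def dc_param_for(mv_x_qpel, mv_y_qpel):
    return dc_params[subpel_phase(mv_x_qpel, mv_y_qpel)]

# Two blocks with the same sub-pel phase use the same global parameter.
assert dc_param_for(6, 4) == dc_param_for(2, 8) == 1.5
```

The same pattern would apply to intra prediction, with the table keyed by intra prediction mode instead of sub-pel phase.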
[0031] The low-level variations can be accomplished in two
principal ways: dependent on or independent of the reference pixel
values. Dependence is achieved either by having the filter
coefficients sum to a certain constant (i.e., a gain), which
defines a multiplicative scaling of the reference pixel values, or
by an explicit multiplicative scaling of the filter output. If the
gain is 1, no scaling of the pixel values is performed. The
independent approach is achieved by an additive offset to the
filter output, where the filter output is the result of applying a
filter to the reference pixel values. The additive offset can be
applied either before the rounding and right shift in a fixed point
implementation of the filtering, or after the filtering process;
the choice depends on the desired accuracy of the offset.
Accordingly, embodiments of the invention may: (1) use global
low-level information to change the gain of a selected
interpolation filter, (2) multiply the global low-level information
with the output from filtering reference pixel values, and (3) add
the global low-level information to the output from filtering
reference pixel values.
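As an illustrative sketch only (the helper names are ours, not from any codec), the three variants enumerated in (1)-(3) could be expressed as follows, where `filt_out` stands for the result of filtering the reference pixel values:

```python
import numpy as np

# (1) Dependent variant via the filter itself: rescale the coefficients
# so they sum to the desired gain g.
def apply_gain(coeffs, g):
    coeffs = np.asarray(coeffs, dtype=float)
    return g * coeffs / coeffs.sum()

# (2) Dependent variant via the output: scale the filter output by g.
def apply_multiplicative(filt_out, g):
    return g * filt_out

# (3) Independent variant: add g to the filter output.
def apply_additive(filt_out, g):
    return filt_out + g
```

With g = 1, `apply_gain` leaves a unit-gain filter unchanged and `apply_multiplicative` is the identity, matching the text's statement that a gain of 1 performs no scaling.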
[0032] The embodiments of the invention described herein can be
used in motion compensated filtering, post/loop filtering or intra
prediction. In post/loop filtering, the current reconstructed frame
(a.k.a. the reference frame) is filtered to better match the
current original frame. If the filtering is outside the coding loop
it is referred to as post-filtering; if it is inside the coding
loop it is referred to as loop filtering. In motion compensated
filtering, a previously coded frame from another time instant
(i.e., a reference frame) is filtered to better fit the current
original frame. In intra prediction, previously reconstructed pixel
values (i.e., reference pixel values) of the current frame are
filtered and extrapolated to better fit the current region to be
coded. In all three examples, a filter is applied to previously
coded pixel values to obtain a better match with the original pixel
values.
Accordingly, embodiments of the invention may be used to: improve
motion compensated inter prediction, improve intra prediction,
refine reconstructed pixel values before storage, and refine
reconstructed pixel values before showing them on a display.
[0033] As mentioned above, it is typical for a filter (e.g., a
filter function--such as a spatial transfer function) to be applied
to previously coded pixels in an encoding or decoding process. For
example, a filter function F may be applied to a block of pixel
values R in a current frame to produce filtered pixel values P as
illustrated below in Equation 1
P(k,l) = \sum_{i=0}^{N-1} \sum_{j=0}^{M-1} F(i,j) R(k - i + int(N/2), l - j + int(M/2)),    (Equation 1)
where P (k,l) is a pixel at row k and column l of the filtered
frame, R is a block of pixel values, F(i,j) is a value of a two
dimensional spatial transfer function with N rows and M columns at
position (i,j), k and l are offsets positioning the filtered block
corresponding to the position of the current block in the frame of
pixel values, and int is the truncation-to-integer function. H.264,
for example, defines a set of filters including the following
filter for half-pel interpolation in one direction: [1 -5 20 20 -5 1]/32.
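For illustration, Equation 1 can be evaluated directly as below. The example applies the H.264 half-pel filter to a flat region, where the output equals the input because the taps sum to 1 (the helper name `filter_pixel` is ours, not from any codec API):

```python
import numpy as np

# Direct (unoptimized) evaluation of Equation 1: filter block R with a
# 2-D spatial transfer function F of N rows and M columns. For positive
# N and M, N // 2 implements the int(N/2) truncation in the equation.
def filter_pixel(F, R, k, l):
    N, M = F.shape
    out = 0.0
    for i in range(N):
        for j in range(M):
            out += F[i, j] * R[k - i + N // 2, l - j + M // 2]
    return out

# H.264's 6-tap half-pel filter, applied in one (horizontal) direction.
F = np.array([[1, -5, 20, 20, -5, 1]]) / 32.0
R = np.full((1, 16), 100.0)        # flat region of pixel value 100
```

On the flat region, `filter_pixel(F, R, 0, 5)` returns 100.0, since the filter coefficients sum to 32/32 = 1.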
[0034] In both inter frame prediction and loop filtering, the
filtering takes place inside a coding loop in video coding. In
inter frame prediction, the frame of pixel values R is a previously
decoded and reconstructed frame from another time instant than the
time instant of the current block to be predicted (i.e., R
comprises reconstructed pixel values). The position is also offset
according to the motion between the frames. In the loop filtering,
the frame of pixel values R is the currently reconstructed frame
before display and storage for inter frame prediction. The loop
filtered frame can then be used for display and for inter frame
prediction.
[0035] When filtering is implemented in fixed point arithmetic,
which is usually the case, the filtering shown in Equation 1 can be
performed directly using the quantized values as shown below in
Equation 2:

P(k,l) = ( \sum_{i=0}^{N-1} \sum_{j=0}^{M-1} \bar{F}(i,j) R(k - i + int(N/2), l - j + int(M/2)) + 2^{B-1} ) >> B,    (Equation 2)

where >> corresponds to a right shift by B bits (equivalently,
division by 2^B), 2^{B-1} corresponds to a rounding factor, and
\bar{F} is a quantized version of F in Equation 1. Quantization of
one filter coefficient is defined as
\bar{F}(i,j) = sign(F(i,j)) * int(abs(F(i,j)) * 2^B + 0.5), wherein
abs(F(i,j)) is the magnitude of F(i,j) and sign(F(i,j)) is the sign
of F(i,j).
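A minimal sketch of the fixed point arithmetic of Equation 2, using the quantization rule above (function names are illustrative):

```python
B = 5  # [1 -5 20 20 -5 1]/32 corresponds to B = 5 fractional bits

def quantize_coeff(f, B):
    """sign(F) * int(abs(F) * 2**B + 0.5), per the quantization rule above."""
    s = 1 if f > 0 else (-1 if f < 0 else 0)
    return s * int(abs(f) * (1 << B) + 0.5)

def filter_fixed_point(Fbar, r, B):
    """Integer filtering with rounding factor 2**(B-1) and right shift by B."""
    acc = sum(int(c) * int(x) for c, x in zip(Fbar, r))
    return (acc + (1 << (B - 1))) >> B

taps = [1/32, -5/32, 20/32, 20/32, -5/32, 1/32]
Fbar = [quantize_coeff(f, B) for f in taps]   # -> [1, -5, 20, 20, -5, 1]
```

On a flat run of samples `[100] * 6`, `filter_fixed_point(Fbar, [100] * 6, B)` returns 100, since the integer taps sum to 32 = 2^B.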
[0036] Referring now to FIG. 3, FIG. 3 is a flow chart illustrating
a process 300, according to some embodiments of the invention, for
inter frame prediction. Process 300 may begin in step 302, where
for each block of a frame of pixel values, a motion vector and a
predicted block are produced using techniques that are well
known in the art. For example, as is well known in the art, the
step of determining a predicted block may include filtering
reconstructed pixel values to obtain filtered reconstructed pixel
values.
[0037] In step 304, for each of the blocks, a prediction error
block is determined (e.g., the pixel differences between the
original block and the predicted block) using well known
techniques.
[0038] In step 306, for each prediction error block, a frequency
selective transform is applied on the prediction error block to
obtain a block of transform coefficients.
[0039] In step 308, for each block of transform coefficients, the
block of transform coefficients is quantized and entropy
encoded.
[0040] In step 310, one or more DC change parameters (e.g., optimal
DC change parameters) are determined for respective full-pixel
and/or sub-pixel predicted blocks. A DC change parameter, in some
embodiments, is a parameter that varies all of the reconstructed
pixel values in a region and/or block similarly. In one embodiment,
the DC change parameters are determined by first defining one or
several sets of blocks of reconstructed pixel values that shall be
modified with the DC change parameter. One criterion is to select
blocks that have large errors in comparison to the original pixel
values. Another criterion is to select blocks that have one similar
or same coding parameter. Some examples of coding parameters are
inter prediction modes (sub-pixel property of the motion vector)
and intra prediction modes. Next, for each defined set of blocks,
minimize the squared error E^2 between original pixel values
S_n and filtered reconstructed pixel values R_n by finding
the optimal DC change parameter g, where
E^2 = \sum_n ( S_n - g(R_n) )^2
in some embodiments. This can be performed with standard least
square optimization techniques. An alternative approach is to test
different values of the DC change parameter and select the one that
gives the least error.
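For the additive form g(R_n) = R_n + g and the multiplicative form g(R_n) = g * R_n, setting dE^2/dg = 0 gives closed-form least-squares solutions. The following sketch (our own helper names, illustrative data) shows both:

```python
import numpy as np

def optimal_additive_dc(S, R):
    """argmin_g sum((S - (R + g))**2)  ->  g = mean(S - R)."""
    S, R = np.asarray(S, dtype=float), np.asarray(R, dtype=float)
    return float(np.mean(S - R))

def optimal_multiplicative_dc(S, R):
    """argmin_g sum((S - g * R)**2)  ->  g = <S, R> / <R, R>."""
    S, R = np.asarray(S, dtype=float), np.asarray(R, dtype=float)
    return float(np.dot(S, R) / np.dot(R, R))

# Original block uniformly 4 brighter than its filtered reconstruction:
S = [104.0, 106.0, 105.0]
R = [100.0, 102.0, 101.0]
```

Here `optimal_additive_dc(S, R)` returns 4.0, the uniform brightness difference the DC change parameter is meant to capture.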
[0041] In step 312, for each block of the frame of pixel values, a
new motion vector and a new predicted block are produced using a
determined DC change parameter.
[0042] In step 313, steps 304-308 are performed again, but this
time using the motion vectors and the predicted blocks produced
from performing step 312.
[0043] In step 314, the DC change parameters are entropy encoded
together with the other video coding parameters.
[0044] In step 316, the produced motion vectors, the entropy
encoded transform coefficients, and the entropy encoded DC change
parameters are provided (e.g., transmitted) to a decoder.
[0045] As mentioned above, the step of producing the predicted
block and motion vector may include filtering reconstructed pixel
values to obtain filtered reconstructed pixel values. For example,
step 302 may include computing filtered reconstructed pixel values
using Equation 1 or Equation 2. In such embodiments, the step of
using the determined DC change parameters in step 312 may include
modifying the filter functions that were used in step 302 and using
the modified filter functions to produce filtered reconstructed
pixel values with a DC change, which are used to produce the
predicted blocks and motion vectors. For example, if F.sub.r is a
filter function that was used in the performance of step 302, then
step 312 may include: (a) modifying F.sub.r using the DC change
parameter determined in step 310 to produce a modified filter
function F.sub.m, (b) using F.sub.m to produce filtered
reconstructed pixel values with a DC change, and (c) using the
filtered reconstructed pixel values to produce the predicted blocks
and motion vectors. In some embodiments, F.sub.m may be produced
using the following equation: F.sub.m=F.sub.r+gF.sub.dc, where g is
a DC change parameter and F.sub.dc is a spatial filter function
that modifies DC according to the DC change parameter g. One
example of F.sub.dc is a filter with one filter coefficient equal
to one and the rest equal to zero (e.g., [0 1 0 0 0 0]). Another
example of F.sub.dc is a filter with two filter coefficients equal
to one and the rest equal to zero (e.g., [0 1 0 0 1 0]).
Accordingly, in some embodiments, step 312 may include computing
filtered reconstructed pixel values with a DC change using the
following equation:
R_f(k,l) = \sum_{i=0}^{N-1} \sum_{j=0}^{M-1} F_m(i,j) \, R\left(k - i + \mathrm{int}\left(\tfrac{N}{2}\right),\; l - j + \mathrm{int}\left(\tfrac{M}{2}\right)\right),
where R_f(k,l) is a filtered reconstructed pixel value.
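The filter modification F_m = F_r + gF_dc and the filtering equation above can be sketched as follows. The 2-D filter shapes and the edge clamping at frame boundaries are illustrative assumptions (the application does not specify boundary handling):

```python
import numpy as np

def apply_modified_filter(R, F_r, F_dc, g):
    """Filter reconstructed pixels R with F_m = F_r + g * F_dc,
    computing R_f(k,l) = sum_i sum_j F_m(i,j) *
    R(k - i + N//2, l - j + M//2). Out-of-frame positions are clamped
    to the nearest edge pixel (an assumption)."""
    F_m = np.asarray(F_r, dtype=np.float64) + g * np.asarray(F_dc, dtype=np.float64)
    N, M = F_m.shape
    H, W = R.shape
    R_f = np.zeros((H, W), dtype=np.float64)
    for k in range(H):
        for l in range(W):
            acc = 0.0
            for i in range(N):
                for j in range(M):
                    y = min(max(k - i + N // 2, 0), H - 1)
                    x = min(max(l - j + M // 2, 0), W - 1)
                    acc += F_m[i, j] * R[y, x]
            R_f[k, l] = acc
    return R_f
```

With F_dc chosen as a pass-through tap (a single coefficient equal to one), adding gF_dc raises the filter's DC gain by g, which is the intended effect.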
[0046] In other embodiments, the step of using the determined DC
change parameters in step 312 may include modifying filtered
reconstructed pixel values by multiplication with a DC change
parameter g, as follows:
R_f(k,l) = g \sum_{i=0}^{N-1} \sum_{j=0}^{M-1} F_r(i,j) \, R\left(k - i + \mathrm{int}\left(\tfrac{N}{2}\right),\; l - j + \mathrm{int}\left(\tfrac{M}{2}\right)\right).
[0047] In yet another embodiment, the step of using the determined
DC change parameters in step 312 may include modifying filtered
reconstructed pixel values by addition with a DC change parameter
g, as follows:
R_f(k,l) = g + \sum_{i=0}^{N-1} \sum_{j=0}^{M-1} F_r(i,j) \, R\left(k - i + \mathrm{int}\left(\tfrac{N}{2}\right),\; l - j + \mathrm{int}\left(\tfrac{M}{2}\right)\right).
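The two post-filtering variants, multiplication by g and addition of g, can be sketched as a small helper applied to already-filtered reconstructed values. The clipping to the 8-bit range [0, 255] is an assumption, not stated in the application:

```python
import numpy as np

def apply_dc_change(filtered, g, mode):
    """Post-filter DC change: mode="mul" scales the filtered
    reconstructed values by g; mode="add" offsets them by g.
    Results are clipped to 8-bit pixel range (an assumption)."""
    Rf = np.asarray(filtered, dtype=np.float64)
    if mode == "mul":
        Rf = g * Rf
    elif mode == "add":
        Rf = Rf + g
    else:
        raise ValueError("mode must be 'mul' or 'add'")
    return np.clip(Rf, 0.0, 255.0)
```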
[0048] In some embodiments, the above techniques may be
combined.
[0049] Referring now to FIG. 4, FIG. 4 is a flow chart illustrating
a process 400, according to an embodiment of the invention, for
decoding video data encoded according to process 300.
[0050] Process 400 may begin in step 402, where decoder 104
receives video data encoded according to process 300. This received
encoded video data includes video coding parameters. More
specifically, in some embodiments, the received encoded video data
includes entropy encoded video coding parameters.
[0051] In step 404, the decoder entropy decodes the video coding
parameters included in the received encoded video data. These video
coding parameters include one or more DC change parameters, and
each of the one or more DC change parameters is associated with one
or more motion vectors included in the received encoded video data.
More specifically, in some embodiments, each DC change parameter is
associated with a type of motion vector. As an example, all motion
vectors of a certain type (e.g., all motion vectors having a
certain sub-pixel interpolation property) are associated with a DC
change parameter. As another example, a DC change parameter is
associated with all full-pixel motion vectors.
[0052] In step 406, the decoder produces a predicted block of pixel
values using a block of pixel values of a previously decoded frame,
a motion vector, and the DC change parameter associated with the
motion vector. For example, in step 406 the decoder produces a
predicted block of pixel values by displacing a corresponding block
of pixel values of a previously decoded frame (i.e., reconstructed
pixel values) according to the motion vector, which has a sub-pel
property, and the DC change parameter linked with the sub-pel
property of the motion vector.
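Step 406 can be sketched as follows. The quarter-pel motion-vector units, the dictionary mapping a sub-pel phase class to an additive DC change parameter, and the fallback to nearest full-pel displacement (rather than true sub-pel interpolation) are all illustrative assumptions to keep the sketch short:

```python
import numpy as np

def predict_block(ref_frame, mv, block_pos, block_size, dc_params):
    """Motion-compensated prediction with a DC change parameter selected
    by the motion vector's sub-pel property. mv is assumed to be in
    quarter-pel units; dc_params maps a (phase_y, phase_x) sub-pel class
    to an additive DC change g."""
    (by, bx), (h, w) = block_pos, block_size
    # The sub-pel phase of the motion vector selects the DC parameter.
    phase = (mv[0] % 4, mv[1] % 4)
    g = dc_params.get(phase, 0.0)
    # Displace by the nearest full-pel offset (a simplification; a real
    # decoder interpolates fractional positions).
    dy, dx = round(mv[0] / 4), round(mv[1] / 4)
    y0, x0 = by + dy, bx + dx
    block = ref_frame[y0:y0 + h, x0:x0 + w].astype(np.float64)
    return block + g  # additive form of the DC change
```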
[0053] In step 408, the decoder inverse quantizes and inverse
transforms a prediction error associated with the motion
vector.
[0054] In step 410, the decoder adds the inverse quantized and
inverse transformed prediction error to the predicted block to
obtain a reconstructed block of pixel values.
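Step 410 is a per-pixel addition, which can be sketched in one line; clipping the sum back into the 8-bit pixel range is an assumption:

```python
import numpy as np

def reconstruct_block(predicted, residual):
    """Add the inverse-quantized, inverse-transformed prediction error
    to the predicted block; clip to 8-bit range (an assumption)."""
    return np.clip(np.asarray(predicted, dtype=np.float64) +
                   np.asarray(residual, dtype=np.float64), 0.0, 255.0)
```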
[0055] In step 412, the reconstructed block of pixel values may be
stored or may be used to output an image on display 106.
[0056] Referring now to FIG. 5, FIG. 5 is a functional block
diagram of encoder 102 according to some embodiments of the
invention. As shown, encoder 102 may comprise a data processing
system 502 (e.g., one or more microprocessors), a data storage
system 506 (e.g., one or more non-volatile storage devices) and
computer video codec software 508 stored on the storage system 506.
Configuration parameters 510 may also be stored in storage system
506. Encoder 102 also includes transmit (Tx) circuitry 504 for
transmitting encoded video data 103 to decoder 104 and receiver
(Rx) circuitry 505 for receiving source video 101 from a video
source 562 (alternatively source video may be stored in storage
system 506). Video codec software 508 is configured such that when
processing system 502 executes video codec software 508, encoder
102 performs steps described above (e.g., steps described above
with reference to the flow chart shown in FIG. 3). For example,
video codec software 508 may include: (1) computer instructions
configured to determine a DC change parameter; (2) computer
instructions configured to decode encoded video data to obtain
reconstructed pixel values; and (3) computer instructions
configured to use the reconstructed pixel values, a filter, and the
DC change parameter to obtain filtered reconstructed pixel values
with a DC change.
[0057] Referring now to FIG. 6, FIG. 6 is a functional block
diagram of decoder 104 according to some embodiments of the
invention. As shown, decoder 104 may comprise a data processing
system 602 (e.g., one or more microprocessors), a data storage
system 606 (e.g., one or more non-volatile storage devices) and
computer video codec software 608 stored on the storage system 606.
Configuration parameters 610 may also be stored in storage system
606. Decoder 104 also includes receiver (Rx) circuitry 604 for
receiving encoded video data 103 from encoder 102. Decoder 104 may
also include a driver 662 for receiving decoded video data 105 and
for using the decoded video data 105 to drive display 106 so that a
user of decoder 104 can view images produced from the decoded video
data. Video codec software 608 is configured such that when
processing system 602 executes video codec software 608, decoder
104 performs steps described above (e.g., steps described above
with reference to the flow chart shown in FIG. 4). For example,
video codec software 608 may include: (1) computer instructions for
receiving the encoded video data, the encoded video data comprising
a motion vector and a DC change parameter associated with the
motion vector; (2) computer instructions for producing a predicted
block of pixel values using a block of pixel values of a previously
decoded frame, the motion vector and the DC change parameter; and
(3) computer instructions for using the predicted block of pixel
values and prediction error information to produce a reconstructed
block of pixel values.
[0058] While various embodiments of the present invention have been
described above, it should be understood that they have been
presented by way of example only, and not limitation. Thus, the
breadth and scope of the present invention should not be limited by
any of the above-described exemplary embodiments.
[0059] Additionally, while the processes described above and
illustrated in the drawings are shown as a sequence of steps, this
was done solely for the sake of illustration. Accordingly, it is
contemplated that some steps may be added, some steps may be
omitted, the order of the steps may be re-arranged, and some steps
may be performed in parallel.
* * * * *