U.S. patent application number 13/821,357 was filed on September 9, 2011 (as a PCT application) and published on 2013-07-04 as publication number 20130170564 for encoding of a picture in a video sequence by example-based data pruning using intra-frame patch similarity.
This patent application is currently assigned to Thomson Licensing. The applicants listed for this patent are Sitaram Bhagavathy, Shan He, and Dong-Qing Zhang. The invention is credited to Sitaram Bhagavathy, Shan He, and Dong-Qing Zhang.
Application Number: 20130170564 / 13/821,357
Document ID: /
Family ID: 44652035
Publication Date: 2013-07-04
United States Patent Application 20130170564
Kind Code: A1
Zhang; Dong-Qing; et al.
July 4, 2013

ENCODING OF A PICTURE IN A VIDEO SEQUENCE BY EXAMPLE-BASED DATA PRUNING USING INTRA-FRAME PATCH SIMILARITY
Abstract
A method and apparatus for encoding a picture in a video sequence
are disclosed. An apparatus includes a library creator for creating
a first library from an original version of the picture and a
second library from a reconstructed version of the picture. Each
library includes high resolution replacement patches for replacing
pruned blocks during a recovery of a pruned version of the picture.
A pruner generates the pruned version from the first library. A
metadata generator generates metadata from the second library for
recovering the pruned version. An encoder encodes the pruned
version and metadata. The first library includes patch clusters.
The pruned version is generated by dividing the original version
into overlapping blocks and searching for candidate patch clusters
for each block. A patch dependency graph having nodes and edges is
used for the searching. Each node represents a respective block,
and each edge represents a respective dependency of the respective
block.
Inventors: Zhang; Dong-Qing (Bridgewater, NJ); Bhagavathy; Sitaram (Palo Alto, CA); He; Shan (Suwanee, GA)

Applicant:
Zhang; Dong-Qing (Bridgewater, NJ, US)
Bhagavathy; Sitaram (Palo Alto, CA, US)
He; Shan (Suwanee, GA, US)
Assignee: Thomson Licensing
Family ID: 44652035
Appl. No.: 13/821,357
Filed: September 9, 2011
PCT Filed: September 9, 2011
PCT No.: PCT/US11/50923
371 Date: March 7, 2013
Related U.S. Patent Documents
Application Number 61/403,107, filed Sep. 10, 2010
Current U.S. Class: 375/240.26
Current CPC Class: H04N 19/593 20141101; H04N 19/176 20141101; G06T 5/001 20130101; H04N 19/14 20141101; H04N 19/85 20141101; H04N 19/587 20141101; H04N 19/46 20141101; H04N 19/44 20141101; H04N 19/61 20141101; H04N 19/132 20141101; H04N 19/97 20141101; H04N 19/59 20141101; H04N 19/463 20141101; H04N 19/196 20141101
Class at Publication: 375/240.26
International Class: H04N 7/26 20060101 H04N007/26
Claims
1. An apparatus for encoding a picture in a video sequence,
comprising: a patch library creator for creating a first patch
library from an original version of said picture and a second patch
library from a reconstructed version of said picture, each of said
first patch library and said second patch library including a
plurality of high resolution replacement patches for replacing one
or more pruned blocks during a recovery of a pruned version of said
picture; and a pruner for generating said pruned version of said
picture from said first patch library; a metadata generator for
generating metadata from said second patch library, said metadata
for recovering said pruned version of said picture; and an encoder
for encoding said pruned version of said picture and said metadata,
wherein said first patch library includes a plurality of patch
clusters, and said pruned version of said picture is generated by
dividing said original version of said picture into a plurality of
overlapping blocks, searching for candidate patch clusters from
among said plurality of patch clusters for each of said plurality
of overlapping blocks based on respective distance metrics from
each of said plurality of overlapping blocks to respective centers
of each of said plurality of patch clusters, identifying a best
matching patch from said candidate patch clusters based on one or
more criteria, and pruning a corresponding one of said plurality of overlapping blocks to obtain a pruned block therefor when
difference between said corresponding one of said plurality of
overlapping blocks and said best matching patch is less than a
threshold difference, and wherein a patch dependency graph having a
plurality of nodes and a plurality of edges is used for said
searching, each of said plurality of nodes representing a
respective one of said plurality of overlapping blocks, and each of
said plurality of edges representing a respective dependency of at
least said respective one of said plurality of overlapping
blocks.
2. The apparatus of claim 1, wherein said pruned version of said
picture is generated by dividing said original version of said
picture into a plurality of blocks, and respectively replacing at
least one of said plurality of blocks with a replacement patch,
wherein all pixels in said replacement patch have one of a same
color value or a low resolution.
3. The apparatus of claim 2, wherein said same color value is equal
to an average of color values of said pixels within said at least
one of said plurality of blocks.
4. The apparatus of claim 1, wherein said first patch library is
created by dividing said original version of said picture into a
plurality of overlapping blocks to form a training data set,
removing any of said plurality of overlapping blocks from said
training set having a high frequency component above a
pre-specified threshold, and clustering remaining ones of said
plurality of overlapping blocks into a plurality of clusters,
wherein each of said remaining ones of said plurality of
overlapping blocks forms a respective one of said plurality of high
resolution replacement patches.
5. The apparatus of claim 4, wherein a respective center of a
respective one of said plurality of clusters corresponds to an
average of any of said remaining ones of said plurality of
overlapping blocks included in said respective one of said
plurality of clusters.
6. The apparatus of claim 5, wherein said remaining ones of said
plurality of overlapping blocks are downsized prior to said
clustering to obtain a plurality of downsized overlapping blocks,
said clustering is performed on said plurality of downsized
overlapping blocks, and said respective center of said respective
one of said plurality of clusters corresponds to said average of
any of said plurality of downsized overlapping blocks included in
said respective one of said plurality of clusters.
7. The apparatus of claim 1, wherein a signature is respectively
created for each of said plurality of high resolution patches
included in said second patch library by generating a feature
vector therefor that includes an average color for a respective
one of said plurality of high resolution patches.
8. The apparatus of claim 7, wherein said average color included in
said feature vector for said respective one of said plurality of
high resolution patches further includes surrounding pixels with
respect to said respective one of said plurality of high resolution
patches.
9. The apparatus of claim 1, wherein only patches preceding a
co-located patch with respect to said corresponding one of said
plurality of overlapping blocks are used for said searching.
10. The apparatus of claim 1, wherein said metadata comprises a
patch index for said best matching patch when said difference
between said corresponding one of said plurality of overlapping
blocks and said best matching patch is less than said threshold
difference, said metadata further comprising a block identifier for
said pruned block.
11. A method for encoding a picture in a video sequence,
comprising: creating a first patch library from an original version
of said picture and a second patch library from a reconstructed
version of said picture, each of said first patch library and said
second patch library including a plurality of high resolution
replacement patches for replacing one or more pruned blocks during
a recovery of a pruned version of said picture; and generating said
pruned version of said picture from said first patch library;
generating metadata from said second patch library, said metadata
for recovering said pruned version of said picture; and encoding
said pruned version of said picture and said metadata, wherein said
first patch library includes a plurality of patch clusters, and
said pruned version of said picture is generated by dividing said
original version of said picture into a plurality of overlapping
blocks, searching for candidate patch clusters from among said
plurality of patch clusters for each of said plurality of
overlapping blocks based on respective distance metrics from each
of said plurality of overlapping blocks to respective centers of
each of said plurality of patch clusters, identifying a best
matching patch from said candidate patch clusters based on one or
more criteria, and pruning a corresponding one of said plurality of overlapping blocks to obtain a pruned block therefor when a
difference between said corresponding one of said plurality of
overlapping blocks and said best matching patch is less than a
threshold difference, and wherein a patch dependency graph having a
plurality of nodes and a plurality of edges is used for said
searching, each of said plurality of nodes representing a
respective one of said plurality of overlapping blocks, and each of
said plurality of edges representing a respective dependency of at
least said respective one of said plurality of overlapping
blocks.
12. The method of claim 11, wherein said pruned version of said
picture is generated by dividing said original version of said
picture into a plurality of blocks, and respectively replacing at
least one of said plurality of blocks with a replacement patch,
wherein all pixels in said replacement patch have one of a same
color value or a low resolution.
13. The method of claim 12, wherein said same color value is equal
to an average of color values of said pixels within said at least
one of said plurality of blocks.
14. The method of claim 11, wherein said first patch library is
created by dividing said original version of said picture into a
plurality of overlapping blocks to form a training data set,
removing any of said plurality of overlapping blocks from said
training set having a high frequency component above a
pre-specified threshold, and clustering remaining ones of said
plurality of overlapping blocks into a plurality of clusters,
wherein each of said remaining ones of said plurality of
overlapping blocks forms a respective one of said plurality of high
resolution replacement patches.
15. The method of claim 14, wherein a respective center of a
respective one of said plurality of clusters corresponds to an
average of any of said remaining ones of said plurality of
overlapping blocks included in said respective one of said
plurality of clusters.
16. The method of claim 14, wherein said remaining ones of said
plurality of overlapping blocks are downsized prior to said
clustering to obtain a plurality of downsized overlapping blocks,
said clustering is performed on said plurality of downsized
overlapping blocks, and said respective center of said respective
one of said plurality of clusters corresponds to said average of
any of said plurality of downsized overlapping blocks included in
said respective one of said plurality of clusters.
17. The method of claim 11, wherein a signature is respectively
created for each of said plurality of high resolution patches
included in said second patch library by generating a feature
vector therefor that includes an average color for a respective
one of said plurality of high resolution patches.
18. The method of claim 17, wherein said average color included in
said feature vector for said respective one of said plurality of
high resolution patches further includes surrounding pixels with
respect to said respective one of said plurality of high resolution
patches.
19. The method of claim 11, wherein only patches preceding a
co-located patch with respect to said corresponding one of said
plurality of overlapping blocks are used for said searching.
20. The method of claim 11, wherein said metadata comprises a patch
index for said best matching patch when said difference between
said corresponding one of said plurality of overlapping blocks and
said best matching patch is less than said threshold difference,
said metadata further comprising a block identifier for said pruned
block.
21. An apparatus for encoding a picture in a video sequence,
comprising: means for creating a first patch library from an
original version of said picture and a second patch library from a
reconstructed version of said picture, each of said first patch
library and said second patch library including a plurality of high
resolution replacement patches for replacing one or more pruned
blocks during a recovery of a pruned version of said picture; means
for generating said pruned version of said picture from said first
patch library; means for generating metadata from said second patch
library, said metadata for recovering said pruned version of said
picture; and means for encoding said pruned version of said picture
and said metadata, wherein said first patch library includes a plurality of patch
clusters, and said pruned version of said picture is generated by
dividing said original version of said picture into a plurality of
overlapping blocks, searching for candidate patch clusters from
among said plurality of patch clusters for each of said plurality
of overlapping blocks based on respective distance metrics from
each of said plurality of overlapping blocks to respective centers
of each of said plurality of patch clusters, identifying a best
matching patch from said candidate patch clusters based on one or
more criteria, and pruning a corresponding one of said plurality of overlapping blocks to obtain a pruned block therefor when a
difference between said corresponding one of said plurality of
overlapping blocks and said best matching patch is less than a
threshold difference, and wherein a patch dependency graph having a
plurality of nodes and a plurality of edges is used for said
searching, each of said plurality of nodes representing a
respective one of said plurality of overlapping blocks, and each of
said plurality of edges representing a respective dependency of at
least said respective one of said plurality of overlapping
blocks.
22. The apparatus of claim 21, wherein said pruned version of said
picture is generated by dividing said original version of said
picture into a plurality of blocks, and respectively replacing at
least one of said plurality of blocks with a replacement patch,
wherein all pixels in said replacement patch have one of a same
color value or a low resolution.
23. The apparatus of claim 22, wherein said same color value is
equal to an average of color values of said pixels within said at
least one of said plurality of blocks.
24. The apparatus of claim 21, wherein said first patch library is
created by dividing said original version of said picture into a
plurality of overlapping blocks to form a training data set,
removing any of said plurality of overlapping blocks from said
training set having a high frequency component above a
pre-specified threshold, and clustering remaining ones of said
plurality of overlapping blocks into a plurality of clusters,
wherein each of said remaining ones of said plurality of
overlapping blocks forms a respective one of said plurality of high
resolution replacement patches.
25. The apparatus of claim 24, wherein a respective center of a
respective one of said plurality of clusters corresponds to an
average of any of said remaining ones of said plurality of
overlapping blocks included in said respective one of said
plurality of clusters.
26. The apparatus of claim 25, wherein said remaining ones of said
plurality of overlapping blocks are downsized prior to said
clustering to obtain a plurality of downsized overlapping blocks,
said clustering is performed on said plurality of downsized
overlapping blocks, and said respective center of said respective
one of said plurality of clusters corresponds to said average of
any of said plurality of downsized overlapping blocks included in
said respective one of said plurality of clusters.
27. The apparatus of claim 21, wherein a signature is respectively
created for each of said plurality of high resolution patches
included in said second patch library by generating a feature
vector therefor that includes an average color for a respective
one of said plurality of high resolution patches.
28. The apparatus of claim 27, wherein said average color included
in said feature vector for said respective one of said plurality of
high resolution patches further includes surrounding pixels with
respect to said respective one of said plurality of high resolution
patches.
29. The apparatus of claim 21, wherein only patches preceding a
co-located patch with respect to said corresponding one of said
plurality of overlapping blocks are used for said searching.
30. The apparatus of claim 21, wherein said metadata comprises a
patch index for said best matching patch when said difference
between said corresponding one of said plurality of overlapping
blocks and said best matching patch is less than said threshold
difference, said metadata further comprising a block identifier for
said pruned block.
Description
[0001] This application claims the benefit of U.S. Provisional
Application Ser. No. 61/403,107 entitled EXAMPLE-BASED DATA PRUNING
USING INTRA-FRAME PATCH SIMILARITY filed on Sep. 10, 2010
(Technicolor Docket No. PU100196).
[0002] This application is related to the following co-pending,
commonly-owned, patent applications: [0003] (1) International (PCT)
Patent Application Serial No. PCT/US11/000107 entitled A
SAMPLING-BASED SUPER-RESOLUTION APPROACH FOR EFFICIENT VIDEO
COMPRESSION filed on Jan. 20, 2011 (Technicolor Docket No.
PU100004); [0004] (2) International (PCT) Patent Application Serial
No. PCT/US11/000117 entitled DATA PRUNING FOR VIDEO COMPRESSION
USING EXAMPLE-BASED SUPER-RESOLUTION filed on Jan. 21, 2011
(Technicolor Docket No. PU100014); [0005] (3) International (PCT)
Patent Application Serial No. XXXX entitled METHODS AND APPARATUS
FOR ENCODING VIDEO SIGNALS USING MOTION COMPENSATED EXAMPLE-BASED
SUPER-RESOLUTION FOR VIDEO COMPRESSION filed on Sep. XX, 2011
(Technicolor Docket No. PU100190); [0006] (4) International (PCT)
Patent Application Serial No. XXXX entitled METHODS AND APPARATUS
FOR DECODING VIDEO SIGNALS USING MOTION COMPENSATED EXAMPLE-BASED
SUPER-RESOLUTION FOR VIDEO COMPRESSION filed on Sep. XX, 2011
(Technicolor Docket No. PU100266); [0007] (5) International (PCT)
Patent Application Serial No. XXXX entitled METHODS AND APPARATUS
FOR ENCODING VIDEO SIGNALS USING EXAMPLE-BASED DATA PRUNING FOR
IMPROVED VIDEO COMPRESSION EFFICIENCY filed on Sep. XX, 2011
(Technicolor Docket No. PU100193); [0008] (6) International (PCT)
Patent Application Serial No. XXXX entitled METHODS AND APPARATUS
FOR DECODING VIDEO SIGNALS USING EXAMPLE-BASED DATA PRUNING FOR
IMPROVED VIDEO COMPRESSION EFFICIENCY filed on Sep. XX, 2011
(Technicolor Docket No. PU100267); [0009] (7) International (PCT)
Patent Application Serial No. XXXX entitled METHODS AND APPARATUS
FOR ENCODING VIDEO SIGNALS FOR BLOCK-BASED MIXED-RESOLUTION DATA
PRUNING filed on Sep. XX, 2011 (Technicolor Docket No. PU100194);
[0010] (8) International (PCT) Patent Application Serial No. XXXX
entitled METHODS AND APPARATUS FOR DECODING VIDEO SIGNALS FOR
BLOCK-BASED MIXED-RESOLUTION DATA PRUNING filed on Sep. XX, 2011
(Technicolor Docket No. PU100268); [0011] (9) International (PCT)
Patent Application Serial No. XXXX entitled METHODS AND APPARATUS
FOR EFFICIENT REFERENCE DATA ENCODING FOR VIDEO COMPRESSION BY
IMAGE CONTENT BASED SEARCH AND RANKING filed on Sep. XX, 2011
(Technicolor Docket No. PU100195); [0012] (10) International (PCT)
Patent Application Serial No. XXXX entitled METHOD AND APPARATUS
FOR EFFICIENT REFERENCE DATA DECODING FOR VIDEO COMPRESSION BY
IMAGE CONTENT BASED SEARCH AND RANKING filed on Sep. XX, 2011
(Technicolor Docket No. PU110106); [0013] (11) International (PCT)
Patent Application Serial No. XXXX entitled METHOD AND APPARATUS
FOR DECODING VIDEO SIGNALS WITH EXAMPLE-BASED DATA PRUNING USING
INTRA-FRAME PATCH SIMILARITY filed on Sep. XX, 2011 (Technicolor
Docket No. PU100269); [0014] (12) International (PCT) Patent
Application Serial No. XXXX entitled PRUNING DECISION OPTIMIZATION
IN EXAMPLE-BASED DATA PRUNING COMPRESSION filed on Sep. XX, 2011
(Technicolor Docket No. PU10197).
[0015] The present principles relate generally to video encoding
and decoding and, more particularly, to methods and apparatus for
example-based data pruning using intra-frame patch similarity.
[0016] Data pruning is a video preprocessing technology that
achieves better video coding efficiency by removing part of the
input video data before it is encoded. The
removed video data is recovered at the decoder side by inferring
the removed video data from the decoded data. One example of data
pruning is image line removal, which removes some of the horizontal
and vertical scan lines in the input video.
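By way of illustration only, the following Python sketch shows line-removal pruning and a naive decoder-side recovery; the function names and the replication-based recovery are illustrative assumptions, not the method of this application.

```python
import numpy as np

def prune_lines(frame: np.ndarray, step: int = 2) -> np.ndarray:
    # Keep every `step`-th horizontal and vertical scan line;
    # the remaining lines are "pruned" away before encoding.
    return frame[::step, ::step]

def recover_lines(pruned: np.ndarray, step: int = 2) -> np.ndarray:
    # Decoder-side recovery: infer the removed lines from the decoded
    # data, here by simple replication (a real system would interpolate).
    return np.repeat(np.repeat(pruned, step, axis=0), step, axis=1)
```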
[0017] In a first approach, a new data pruning method called
example-based data pruning is employed, in which external videos or
video frames that have been previously transmitted to the decoder
side are used to train an example patch library. The patch library
is then used to prune and recover the video data.
[0018] There have been several efforts to explore using data
pruning to increase compression efficiency. For example, in a
second approach and a third approach, a texture replacement based
method is used to remove texture regions at the encoder side, and
re-synthesize the texture regions at the decoder side. Compression
efficiency is gained because only synthesis parameters, which are smaller than the regular transform coefficients, are sent to the decoder. In a fourth approach and a fifth approach,
spatio-temporal texture synthesis and edge-based inpainting are
used to remove some of the regions at the encoder side, and the
removed content is recovered at the decoder side, with the help of
metadata such as region masks. However, the fourth and fifth
approaches need to modify the encoder and decoder so that the
encoder/decoder can selectively perform encoding/decoding for some
of the regions using the region masks. Therefore, it is not exactly
an out-of-loop approach (i.e., the encoder and decoder need to be
modified). In a sixth approach, a line removal based method is
proposed to rescale a video to a smaller size by selectively
removing some of the horizontal or vertical lines in the video with
a least-square minimization framework. The sixth approach is an
out-of-loop approach, and does not require modification of the
encoder/decoder. However, completely removing certain horizontal
and vertical lines may result in loss of information or details for
some videos.
[0019] Some preliminary research on data pruning for video
compression has been conducted. For example, in a seventh approach,
a data pruning scheme using sampling-based super-resolution is
presented. The full resolution frame is sampled into several
smaller-sized frames, thereby reducing the spatial size of the
original video. At the decoder side, the high-resolution frame is
re-synthesized from the downsampled frames with the help of
metadata received from the encoder side. In an eighth approach, an
example-based super-resolution based method for data pruning is
presented. A representative patch library is trained from the
original video. Afterwards, the video is downsized to a smaller
size. The downsized video and the patch library are sent to the
decoder side. The recovery process at the decoder side
super-resolves the downsized video by example-based
super-resolution using the patch library. However, because there is
substantial redundancy between the patch library and downsized
frames, it has been discovered that it may be difficult to achieve
compression gain using the eighth approach.
[0020] In the aforementioned first approach, an example-based data
pruning method creates a patch library using the video frames that
have been sent to the decoder side and uses the patch library to
prune and recover video frames. However, this method does not
consider the intra-frame patch dependency, which may arise when
there are repetitive textures or patterns in a video frame.
[0021] In the International Organization for
Standardization/International Electrotechnical Commission (ISO/IEC)
Moving Picture Experts Group-4 (MPEG-4) Part 10 Advanced Video
Coding (AVC) Standard/International Telecommunication Union,
Telecommunication Standardization Sector (ITU-T) H.264 Recommendation (hereinafter
the "MPEG-4 AVC Standard"), intra-frame block prediction is
realized by block prediction from the neighboring blocks. However,
long-range similarity of non-neighboring blocks is not exploited to
increase compression efficiency.
[0022] These and other drawbacks and disadvantages of these
approaches are addressed by the present principles, which are
directed to methods and apparatus for example-based data pruning
using intra-frame patch similarity.
[0023] According to an aspect of the present principles, there is
provided an apparatus for encoding a picture in a video sequence.
The apparatus includes a patch library creator for creating a first
patch library from an original version of the picture and a second
patch library from a reconstructed version of the picture. Each of
the first patch library and the second patch library includes a
plurality of high resolution replacement patches for replacing one
or more pruned blocks during a recovery of a pruned version of the
picture. The apparatus also includes a pruner for generating the
pruned version of the picture from the first patch library. The
apparatus further includes a metadata generator for generating
metadata from the second patch library. The metadata is for
recovering the pruned version of the picture. The apparatus
additionally includes an encoder for encoding the pruned version of
the picture and the metadata. The first patch library includes a
plurality of patch clusters, and the pruned version of the picture
is generated by dividing the original version of the picture into a
plurality of overlapping blocks, searching for candidate patch
clusters from among the plurality of patch clusters for each of the
plurality of overlapping blocks based on respective distance
metrics from each of the plurality of overlapping blocks to
respective centers of each of the plurality of patch clusters,
identifying a best matching patch from the candidate patch clusters
based on one or more criteria, and pruning a corresponding one of the plurality of overlapping blocks to obtain a pruned block therefor when a difference between the corresponding one of the
plurality of overlapping blocks and the best matching patch is less
than a threshold difference. A patch dependency graph having a
plurality of nodes and a plurality of edges is used for the
searching. Each of the plurality of nodes represents a respective
one of the plurality of overlapping blocks, and each of the
plurality of edges represents a respective dependency of at least
the respective one of the plurality of overlapping blocks.
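A minimal sketch of the pruning decision just described might look as follows, assuming `blocks` maps block identifiers to overlapping block arrays, `centers` is an array of flattened cluster centers, and `library` maps a cluster index to its high resolution patches; these names and the Euclidean distance choice are illustrative assumptions, not the claimed implementation.

```python
import numpy as np

def prune_frame(blocks, centers, library, threshold, n_candidates=4):
    pruned = {}
    for block_id, block in blocks.items():
        # Candidate patch clusters: the nearest cluster centers by
        # distance from this block to each center.
        dists = np.linalg.norm(centers - block.ravel(), axis=1)
        candidates = np.argsort(dists)[:n_candidates]
        # Best matching patch across the candidate clusters.
        best, best_err = None, np.inf
        for c in candidates:
            for idx, patch in enumerate(library[c]):
                err = np.linalg.norm(patch - block)
                if err < best_err:
                    best, best_err = (c, idx), err
        # Prune the block only if the best match is below the threshold,
        # so that it can be recovered from the library later.
        if best_err < threshold:
            pruned[block_id] = best  # metadata used for recovery
    return pruned
```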
[0024] According to another aspect of the present principles, there
is provided a method for encoding a picture in a video sequence.
The method includes creating a first patch library from an original
version of the picture and a second patch library from a
reconstructed version of the picture. Each of the first patch
library and the second patch library includes a plurality of high
resolution replacement patches for replacing one or more pruned
blocks during a recovery of a pruned version of the picture. The
method also includes generating the pruned version of the picture
from the first patch library. The method further includes
generating metadata from the second patch library. The metadata is
for recovering the pruned version of the picture. The method
additionally includes encoding the pruned version of the picture
and the metadata. The first patch library includes a plurality of
patch clusters, and the pruned version of the picture is generated
by dividing the original version of the picture into a plurality of
overlapping blocks, searching for candidate patch clusters from
among the plurality of patch clusters for each of the plurality of
overlapping blocks based on respective distance metrics from each
of the plurality of overlapping blocks to respective centers of
each of the plurality of patch clusters, identifying a best
matching patch from the candidate patch clusters based on one or
more criteria, and pruning a corresponding one of the plurality of overlapping blocks to obtain a pruned block therefor when a
difference between the corresponding one of the plurality of
overlapping blocks and the best matching patch is less than a
threshold difference. A patch dependency graph having a plurality
of nodes and a plurality of edges is used for the searching. Each
of the plurality of nodes represents a respective one of the
plurality of overlapping blocks, and each of the plurality of edges
represents a respective dependency of at least the respective one
of the plurality of overlapping blocks.
[0025] According to still another aspect of the present principles,
there is provided an apparatus for recovering a pruned version of a
picture in a video sequence. The apparatus includes a divider for
dividing the pruned version of the picture into a plurality of
non-overlapping blocks. The apparatus also includes a metadata
decoder for decoding metadata for use in recovering the pruned
version of the picture. The apparatus further includes a patch
library creator for creating a patch library from a reconstructed
version of the picture. The patch library includes a plurality of
high resolution replacement patches for replacing the one or more
pruned blocks during a recovery of the pruned version of the
picture. The apparatus additionally includes a search and
replacement device for performing a searching process using the
metadata to find a corresponding patch for a respective one of the
one or more pruned blocks from among the plurality of
non-overlapping blocks and replace the respective one of the one or
more pruned blocks with the corresponding patch. A signature is
respectively created for each of the one or more pruned blocks, and
the pruned version of the picture is recovered by comparing
respective distance metrics from signatures for each of the
plurality of high resolution patches to signatures for each of the
one or more pruned blocks, sorting the respective distance metrics
to obtain a rank list for each of the one or more pruned blocks,
wherein a rank number in the rank list for a particular one of the
one or more pruned blocks is used to retrieve a corresponding one
of the plurality of high resolution patches in the patch library to
be used to replace the particular one of the one or more pruned
blocks. A patch dependency graph having a plurality of nodes and a
plurality of edges is used to recover the pruned version of the
picture. Each of the plurality of nodes represents a respective one
of the plurality of overlapping blocks, and each of the plurality
of edges represents a respective dependency of at least the
respective one of the plurality of overlapping blocks.
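The recovery step can be pictured with a short sketch: the decoder ranks the library patches by signature distance to a pruned block and uses a transmitted rank number to select the replacement. The helper `signature` (for example, an average-color feature vector) and the other names are illustrative assumptions.

```python
import numpy as np

def recover_block(pruned_block, rank_number, library_patches, signature):
    # Distance from each library patch signature to the block signature.
    dists = [np.linalg.norm(signature(p) - signature(pruned_block))
             for p in library_patches]
    # Sort the distances to obtain the rank list for this pruned block.
    rank_list = np.argsort(dists)
    # The rank number (received as metadata) retrieves the replacement.
    return library_patches[rank_list[rank_number]]
```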
[0026] According to a further aspect of the present principles,
there is provided a method for recovering a pruned version of a
picture in a video sequence. The method includes dividing the
pruned version of the picture into a plurality of non-overlapping
blocks. The method also includes decoding metadata for use in
recovering the pruned version of the picture. The method further
includes creating a patch library from a reconstructed version of
the picture. The patch library includes a plurality of high
resolution replacement patches for replacing the one or more pruned
blocks during a recovery of the pruned version of the picture. The
method additionally includes performing a searching process using
the metadata to find a corresponding patch for a respective one of
the one or more pruned blocks from among the plurality of
non-overlapping blocks and replace the respective one of the one or
more pruned blocks with the corresponding patch. A signature is
respectively created for each of the one or more pruned blocks, and
the pruned version of the picture is recovered by comparing
respective distance metrics from signatures for each of the
plurality of high resolution patches to signatures for each of the
one or more pruned blocks, sorting the respective distance metrics
to obtain a rank list for each of the one or more pruned blocks,
wherein a rank number in the rank list for a particular one of the
one or more pruned blocks is used to retrieve a corresponding one
of the plurality of high resolution patches in the patch library to
be used to replace the particular one of the one or more pruned
blocks. A patch dependency graph having a plurality of nodes and a
plurality of edges is used to recover the pruned version of the
picture. Each of the plurality of nodes represents a respective one
of the plurality of overlapping blocks, and each of the plurality
of edges represents a respective dependency of at least the
respective one of the plurality of overlapping blocks.
[0027] According to a still further aspect of the present
principles, there is provided an apparatus for encoding a picture
in a video sequence. The apparatus includes means for creating a
first patch library from an original version of the picture and a
second patch library from a reconstructed version of the picture.
Each of the first patch library and the second patch library
includes a plurality of high resolution replacement patches for
replacing one or more pruned blocks during a recovery of a pruned
version of the picture. The apparatus also includes means for
generating the pruned version of the picture from the first patch
library. The apparatus further includes means for generating
metadata from the second patch library, the metadata for recovering
the pruned version of the picture. The apparatus additionally
includes means for encoding the pruned version of the picture and
the metadata. The first patch library includes a plurality of patch
clusters, and the pruned version of the picture is generated by
dividing the original version of the picture into a plurality of
overlapping blocks, searching for candidate patch clusters from
among the plurality of patch clusters for each of the plurality of
overlapping blocks based on respective distance metrics from each
of the plurality of overlapping blocks to respective centers of
each of the plurality of patch clusters, identifying a best
matching patch from the candidate patch clusters based on one or
more criteria, and pruning a corresponding one of the plurality of overlapping blocks to obtain a pruned block therefor when a
difference between the corresponding one of the plurality of
overlapping blocks and the best matching patch is less than a
threshold difference. A patch dependency graph having a plurality
of nodes and a plurality of edges is used for the searching. Each
of the plurality of nodes represents a respective one of the
plurality of overlapping blocks, and each of the plurality of edges
represents a respective dependency of at least the respective one
of the plurality of overlapping blocks.
[0028] According to an additional aspect of the present principles,
there is provided an apparatus for recovering a pruned version of a
picture in a video sequence. The apparatus includes means for
dividing the pruned version of the picture into a plurality of
non-overlapping blocks. The apparatus also includes means for
decoding metadata for use in recovering the pruned version of the
picture. The apparatus further includes means for creating a patch
library from a reconstructed version of the picture. The patch
library includes a plurality of high resolution replacement patches
for replacing the one or more pruned blocks during a recovery of
the pruned version of the picture. The apparatus additionally
includes means for performing a searching process using the
metadata to find a corresponding patch for a respective one of the
one or more pruned blocks from among the plurality of
non-overlapping blocks and replace the respective one of the one or
more pruned blocks with the corresponding patch. A signature is
respectively created for each of the one or more pruned blocks, and
the pruned version of the picture is recovered by comparing
respective distance metrics from signatures for each of the
plurality of high resolution patches to signatures for each of the
one or more pruned blocks, sorting the respective distance metrics
to obtain a rank list for each of the one or more pruned blocks,
wherein a rank number in the rank list for a particular one of the
one or more pruned blocks is used to retrieve a corresponding one
of the plurality of high resolution patches in the patch library to
be used to replace the particular one of the one or more pruned
blocks. A patch dependency graph having a plurality of nodes and a
plurality of edges is used to recover the pruned version of the
picture. Each of the plurality of nodes represents a respective one
of the plurality of overlapping blocks, and each of the plurality
of edges represents a respective dependency of at least the
respective one of the plurality of overlapping blocks.
[0029] These and other aspects, features and advantages of the
present principles will become apparent from the following detailed
description of exemplary embodiments, which is to be read in
connection with the accompanying drawings.
[0030] The present principles may be better understood in
accordance with the following exemplary figures, in which:
[0031] FIG. 1 is a block diagram showing an exemplary example-based
data pruning system using intra-frame patch similarity, in
accordance with an embodiment of the present principles;
[0032] FIG. 2 is a block diagram showing an exemplary video encoder
to which the present principles may be applied, in accordance with
an embodiment of the present principles;
[0033] FIG. 3 is a block diagram showing an exemplary video decoder
to which the present principles may be applied, in accordance with
an embodiment of the present principles;
[0034] FIG. 4 is a block diagram showing an exemplary first portion
for performing encoder side processing in an example-based data
pruning system using intra-frame patch similarity, in accordance
with an embodiment of the present principles;
[0035] FIG. 5 is a block diagram showing an exemplary method for
clustering and patch library creation, in accordance with an
embodiment of the present principles;
[0036] FIG. 6 is a block diagram showing an exemplary patch library
and corresponding clusters, in accordance with an embodiment of the
present principles;
[0037] FIG. 7 is a diagram showing an exemplary signature vector,
in accordance with an embodiment of the present principles;
[0038] FIG. 8 is a block diagram showing an exemplary second
portion for performing encoder side processing in an example-based
data pruning system using intra-frame patch similarity, in
accordance with an embodiment of the present principles;
[0039] FIG. 9 is a flow diagram showing an exemplary method for
video frame pruning, in accordance with an embodiment of the
present principles;
[0040] FIG. 10 is a block diagram showing a patch search process,
in accordance with an embodiment of the present principles;
[0041] FIG. 11 is a diagram showing a block search area in a
process of pruning for causal recovery, in accordance with an
embodiment of the present principles;
[0042] FIG. 12 is a diagram showing an exemplary block dependency
graph, in accordance with an embodiment of the present
principles;
[0043] FIG. 13 is a flow diagram showing an exemplary method for
obtaining a recovery sequence, in accordance with an embodiment of
the present principles;
[0044] FIG. 14 is a diagram showing an exemplary evolution of the
dependency graph using the pruning algorithm, in accordance with an
embodiment of the present principles;
[0045] FIG. 15 is a diagram showing an exemplary mixed-resolution
frame, in accordance with an embodiment of the present
principles;
[0046] FIG. 16 is a flow diagram showing an exemplary method for
encoding metadata, in accordance with an embodiment of the present
principles;
[0047] FIG. 17 is a flow diagram showing an example method for
encoding pruned block IDs, in accordance with an embodiment of the
present principles;
[0048] FIG. 18 is a flow diagram showing an exemplary method for
encoding a patch index, in accordance with an embodiment of the
present principles;
[0049] FIG. 19 is a flow diagram showing an exemplary method for
decoding a patch index, in accordance with an embodiment of the
present principles;
[0050] FIG. 20 is a diagram showing an exemplary block ID, in
accordance with an embodiment of the present principles;
[0051] FIG. 21 is a flow diagram showing an exemplary method for
pruning subsequent frames, in accordance with an embodiment of the
present principles;
[0052] FIG. 22 is a diagram showing an exemplary motion vector for
a pruned block, in accordance with an embodiment of the present
principles;
[0053] FIG. 23 is a flow diagram showing an exemplary method for
decoding metadata, in accordance with an embodiment of the present
principles;
[0054] FIG. 24 is a flow diagram showing an exemplary method for
decoding pruned block IDs, in accordance with an embodiment of the
present principles;
[0055] FIG. 25 is a block diagram showing an exemplary apparatus
for performing decoder side processing for example-based data
pruning using intra-frame patch similarity, in accordance with an
embodiment of the present principles;
[0056] FIG. 26 is a flow diagram showing an exemplary method for
recovering a pruned frame, in accordance with an embodiment of the
present principles; and
[0057] FIG. 27 is a flow diagram showing an exemplary method for
recovering subsequent frames, in accordance with an embodiment of
the present principles.
[0058] The present principles are directed to methods and apparatus
for example-based data pruning using intra-frame patch
similarity.
[0059] The present description illustrates the present principles.
It will thus be appreciated that those skilled in the art will be
able to devise various arrangements that, although not explicitly
described or shown herein, embody the present principles and are
included within its spirit and scope.
[0060] All examples and conditional language recited herein are
intended for pedagogical purposes to aid the reader in
understanding the present principles and the concepts contributed
by the inventor(s) to furthering the art, and are to be construed
as being without limitation to such specifically recited examples
and conditions.
[0061] Moreover, all statements herein reciting principles,
aspects, and embodiments of the present principles, as well as
specific examples thereof, are intended to encompass both
structural and functional equivalents thereof. Additionally, it is
intended that such equivalents include both currently known
equivalents as well as equivalents developed in the future, i.e.,
any elements developed that perform the same function, regardless
of structure.
[0062] Thus, for example, it will be appreciated by those skilled
in the art that the block diagrams presented herein represent
conceptual views of illustrative circuitry embodying the present
principles. Similarly, it will be appreciated that any flow charts,
flow diagrams, state transition diagrams, pseudocode, and the like
represent various processes which may be substantially represented
in computer readable media and so executed by a computer or
processor, whether or not such computer or processor is explicitly
shown.
[0063] The functions of the various elements shown in the figures
may be provided through the use of dedicated hardware as well as
hardware capable of executing software in association with
appropriate software. When provided by a processor, the functions
may be provided by a single dedicated processor, by a single shared
processor, or by a plurality of individual processors, some of
which may be shared. Moreover, explicit use of the term "processor"
or "controller" should not be construed to refer exclusively to
hardware capable of executing software, and may implicitly include,
without limitation, digital signal processor ("DSP") hardware,
read-only memory ("ROM") for storing software, random access memory
("RAM"), and non-volatile storage.
[0064] Other hardware, conventional and/or custom, may also be
included. Similarly, any switches shown in the figures are
conceptual only. Their function may be carried out through the
operation of program logic, through dedicated logic, through the
interaction of program control and dedicated logic, or even
manually, the particular technique being selectable by the
implementer as more specifically understood from the context.
[0065] In the claims hereof, any element expressed as a means for
performing a specified function is intended to encompass any way of
performing that function including, for example, a) a combination
of circuit elements that performs that function or b) software in
any form, including, therefore, firmware, microcode or the like,
combined with appropriate circuitry for executing that software to
perform the function. The present principles as defined by such
claims reside in the fact that the functionalities provided by the
various recited means are combined and brought together in the
manner which the claims call for. It is thus regarded that any
means that can provide those functionalities are equivalent to
those shown herein.
[0066] Reference in the specification to "one embodiment" or "an
embodiment" of the present principles, as well as other variations
thereof, means that a particular feature, structure,
characteristic, and so forth described in connection with the
embodiment is included in at least one embodiment of the present
principles. Thus, the appearances of the phrase "in one embodiment"
or "in an embodiment", as well any other variations, appearing in
various places throughout the specification are not necessarily all
referring to the same embodiment.
[0067] It is to be appreciated that the use of any of the following
"/", "and/or", and "at least one of", for example, in the cases of
"A/B", "A and/or B" and "at least one of A and B", is intended to
encompass the selection of the first listed option (A) only, or the
selection of the second listed option (B) only, or the selection of
both options (A and B). As a further example, in the cases of "A,
B, and/or C" and "at least one of A, B, and C", such phrasing is
intended to encompass the selection of the first listed option (A)
only, or the selection of the second listed option (B) only, or the
selection of the third listed option (C) only, or the selection of
the first and the second listed options (A and B) only, or the
selection of the first and third listed options (A and C) only, or
the selection of the second and third listed options (B and C)
only, or the selection of all three options (A and B and C). This
may be extended, as readily apparent by one of ordinary skill in
this and related arts, for as many items listed.
[0068] Turning to FIG. 1, an exemplary example-based data pruning
system using intra-frame patch similarity is indicated generally by
the reference numeral 100. The pruning system 100 includes a pruner
105 having an output connected in signal communication with an
input of a video encoder 110 and a first input of a metadata
generator and encoder 135. An output of the video encoder 110 is
connected in signal communication with an input of a video decoder
115 and an input of a patch library creator 140. An output of the
video decoder 115 is connected in signal communication with a first
input of a recovery device 120. An output of the patch library
creator 130 is connected in signal communication with a second
input of the recovery device 120. An output of the metadata
generator and encoder 135 is connected in signal communication with
an input of a metadata decoder 125. An output of the metadata
decoder 125 is connected in signal communication with a third input
of the recovery device 120. An output of the patch library creator
140 is connected in signal communication with a second input of the
metadata generator and encoder 135. An output of a clustering
device and patch library creator 145 is connected in signal
communication with a second input of the pruner 105. An input of
the pruner 105 and an input of the clustering device and patch
library creator 145 are available as inputs to the pruning system
100, for receiving input video. An output of the recovery device is
available as an output of the pruning system 100, for outputting
video.
[0069] Turning to FIG. 2, an exemplary video encoder to which the
present principles may be applied is indicated generally by the
reference numeral 200. The video encoder 200 includes a frame
ordering buffer 210 having an output in signal communication with a
non-inverting input of a combiner 285. An output of the combiner
285 is connected in signal communication with a first input of a
transformer and quantizer 225. An output of the transformer and
quantizer 225 is connected in signal communication with a first
input of an entropy coder 245 and a first input of an inverse
transformer and inverse quantizer 250. An output of the entropy
coder 245 is connected in signal communication with a first
non-inverting input of a combiner 290. An output of the combiner
290 is connected in signal communication with a first input of an
output buffer 235.
[0070] A first output of an encoder controller 205 is connected in
signal communication with a second input of the frame ordering
buffer 210, a second input of the inverse transformer and inverse
quantizer 250, an input of a picture-type decision module 215, a
first input of a macroblock-type (MB-type) decision module 220, a
second input of an intra prediction module 260, a second input of a
deblocking filter 265, a first input of a motion compensator 270, a
first input of a motion estimator 275, and a second input of a
reference picture buffer 280.
[0071] A second output of the encoder controller 205 is connected
in signal communication with a first input of a Supplemental
Enhancement Information (SEI) inserter 230, a second input of the
transformer and quantizer 225, a second input of the entropy coder
245, a second input of the output buffer 235, and an input of the
Sequence Parameter Set (SPS) and Picture Parameter Set (PPS)
inserter 240.
[0072] An output of the SEI inserter 230 is connected in signal
communication with a second non-inverting input of the combiner
290.
[0073] A first output of the picture-type decision module 215 is
connected in signal communication with a third input of the frame
ordering buffer 210. A second output of the picture-type decision
module 215 is connected in signal communication with a second input
of a macroblock-type decision module 220.
[0074] An output of the Sequence Parameter Set (SPS) and Picture
Parameter Set (PPS) inserter 240 is connected in signal
communication with a third non-inverting input of the combiner
290.
[0075] An output of the inverse transformer and inverse quantizer
250 is connected in signal communication with a first non-inverting
input of a combiner 219. An output of the combiner 219 is connected
in signal communication with a first input of the intra prediction
module 260 and a first input of the deblocking filter 265. An
output of the deblocking filter 265 is connected in signal
communication with a first input of a reference picture buffer 280.
An output of the reference picture buffer 280 is connected in
signal communication with a second input of the motion estimator
275 and a third input of the motion compensator 270. A first output
of the motion estimator 275 is connected in signal communication
with a second input of the motion compensator 270. A second output
of the motion estimator 275 is connected in signal communication
with a third input of the entropy coder 245.
[0076] An output of the motion compensator 270 is connected in
signal communication with a first input of a switch 297. An output
of the intra prediction module 260 is connected in signal
communication with a second input of the switch 297. An output of
the macroblock-type decision module 220 is connected in signal
communication with a third input of the switch 297. The third input
of the switch 297 determines whether or not the "data" input of the
switch (as compared to the control input, i.e., the third input) is
to be provided by the motion compensator 270 or the intra
prediction module 260. The output of the switch 297 is connected in
signal communication with a second non-inverting input of the
combiner 219 and an inverting input of the combiner 285.
[0077] A first input of the frame ordering buffer 210 and an input
of the encoder controller 205 are available as inputs of the
encoder 200, for receiving an input picture. Moreover, a second
input of the Supplemental Enhancement Information (SEI) inserter
230 is available as an input of the encoder 200, for receiving
metadata. An output of the output buffer 235 is available as an
output of the encoder 200, for outputting a bitstream.
[0078] Turning to FIG. 3, an exemplary video decoder to which the
present principles may be applied is indicated generally by the
reference numeral 300. The video decoder 300 includes an input
buffer 310 having an output connected in signal communication with
a first input of an entropy decoder 345. A first output of the
entropy decoder 345 is connected in signal communication with a
first input of an inverse transformer and inverse quantizer 350. An
output of the inverse transformer and inverse quantizer 350 is
connected in signal communication with a second non-inverting input
of a combiner 325. An output of the combiner 325 is connected in
signal communication with a second input of a deblocking filter 365
and a first input of an intra prediction module 360. A second
output of the deblocking filter 365 is connected in signal
communication with a first input of a reference picture buffer 380.
An output of the reference picture buffer 380 is connected in
signal communication with a second input of a motion compensator
370.
[0079] A second output of the entropy decoder 345 is connected in
signal communication with a third input of the motion compensator
370, a first input of the deblocking filter 365, and a third input
of the intra prediction module 360. A third output of the entropy decoder
345 is connected in signal communication with an input of a decoder
controller 305. A first output of the decoder controller 305 is
connected in signal communication with a second input of the
entropy decoder 345. A second output of the decoder controller 305
is connected in signal communication with a second input of the
inverse transformer and inverse quantizer 350. A third output of
the decoder controller 305 is connected in signal communication
with a third input of the deblocking filter 365. A fourth output of
the decoder controller 305 is connected in signal communication
with a second input of the intra prediction module 360, a first
input of the motion compensator 370, and a second input of the
reference picture buffer 380.
[0080] An output of the motion compensator 370 is connected in
signal communication with a first input of a switch 397. An output
of the intra prediction module 360 is connected in signal
communication with a second input of the switch 397. An output of
the switch 397 is connected in signal communication with a first
non-inverting input of the combiner 325.
[0081] An input of the input buffer 310 is available as an input of
the decoder 300, for receiving an input bitstream. A first output
of the deblocking filter 365 is available as an output of the
decoder 300, for outputting an output picture.
[0082] As noted above, the present principles are directed to
methods and apparatus for example-based data pruning using
intra-frame patch similarity.
[0083] In accordance with the present principles, this application
discloses a new approach that takes advantage of patch similarity
within a video frame. Patch similarity within an image occurs in
many real-world pictures that contain repetitive textures or
patterns such as, for example, a picture with wallpaper as the
background. The within-picture patch similarity is discovered by a
clustering algorithm, and a patch library is created for pruning and
recovery. However, since the same frame is used both for creating
the patch library and for pruning/recovery, the patch dependency
problem has to be resolved in order to ensure artifact-free
recovery.
[0084] The present principles provide an improvement over our
previous approach by training the patch library at the decoder side
using previously sent or existing frames, rather than
sending the patch library through one or more communication
channels. Moreover, the data pruning is realized by replacing some
blocks in the input frames with flat regions to create
"mixed-resolution" frames.
[0085] As noted above, in the MPEG-4 AVC Standard, intra-frame
block prediction is realized by block prediction from the
neighboring blocks. However, long-range similarity of
non-neighboring blocks is not exploited to increase compression
efficiency. Advantageously, the present principles provide a method
for pruning an input video so that the input video can be more
efficiently encoded by a video encoder. The present principles take
advantage of the similarity of image patches within a video frame
to further increase the compression efficiency.
[0086] In accordance with the present principles, intra-frame patch
similarity is used to train an example patch library, to prune a
video, and to recover the pruned video. Error-bounded clustering (a
modified K-means clustering) is used for efficient patch searching
in the library. To improve compression efficiency, a
mixed-resolution data pruning scheme is used, where blocks are
replaced by flat blocks to reduce the high-frequency signal.
[0087] The present principles may involve the use of patch
signature matching, a matching rank list, and rank number encoding
to increase the efficiency of metadata (best-match patch position
in the library) encoding. Moreover, a method is disclosed for
encoding the block coordinates using flat block identification based
on color variation.
[0088] Referring back to FIG. 1, one difference between the present
principles and the aforementioned first approach is that the input
video frame for pruning is also used for patch library creation. In
the pruning system 100, the encoder-side processing component can
be considered to include two parts, namely a patch library creation
part and a pruning part. At the encoder side, two patch libraries
are generated: one patch library from the original frame, the other
patch library from the reconstructed frame (i.e., a pruned, encoded,
and then decoded frame). The latter is exactly the same as the patch
library created at the decoder side, in that both use exactly the
same frame (i.e., the reconstructed frame) to generate the patch
libraries. At the encoder side, the patch library created using the
original frame is used to prune the blocks, whereas the patch
library created using the reconstructed frame is used to encode
metadata. The reason for using a patch library created from the
reconstructed frame is to make sure that the patch libraries for
encoding and decoding metadata are identical at the encoder and
decoder sides. For the patch library created using the original
frames, a clustering algorithm is performed to group the patches so
that the patch search process during pruning can be efficiently
carried out. Pruning is a process that modifies the source video
using the patch library so that fewer bits are sent to the decoder
side. Pruning is realized by dividing
a video frame into blocks, and replacing some of the blocks with
flat blocks. The pruned frame is then taken as the input for a
video encoder. An example video encoder to which the present
principles may be applied is shown in FIG. 2 described above.
[0089] Referring back to FIG. 1, the decoder-side processing
component of the pruning system 100 can also be considered to
include two parts, namely a patch library creation part and a
recovery part. Patch library creation is a process to create a
patch library that is exactly the same as the library used for
pruning at the encoder side. This is ensured by using exactly the
same frame (i.e., the reconstructed pruned frame) for patch library
creation. The recovery component is a process to recover the pruned
content in the decoded pruned frames sent from the encoder side. An
example video decoder to which the present principles may be
applied is shown in FIG. 3 described above.
Patch Library Creation
[0090] Turning to FIG. 4, an exemplary first portion for performing
encoder side processing in an example-based data pruning system
using intra-frame patch similarity is indicated generally by the
reference numeral 400. The first portion 400 includes a divider 410
having an output in signal communication with an input of a
clustering device 420. An input of the divider 410 is available as an
input to the first portion 400, for receiving the first frame in a
GOP. An output of the clustering device 420 is available as an
output of the first portion 400, for outputting clusters and a
patch library.
[0091] Turning to FIG. 5, an exemplary method for clustering and
patch library creation is indicated generally by the reference
numeral 500. At step 505, a training video frame is input. At step
510, the training video frame is divided (by divider 410) into
overlapping blocks. At step 515, blocks without high-frequency
details are removed (by the clustering device 420). At step 520,
the blocks are clustered (by the clustering device 420). At step
525, clusters and a patch library are output.
[0092] The patch library is a pool of high resolution patches that
can be used to recover pruned image blocks. Turning to FIG. 6, an
exemplary patch library and corresponding clusters are indicated
generally by the reference numeral 600. The patch library is
specifically indicated by the reference numeral 610, and includes a
signature portion 611 and a high resolution patch portion 612. For
the encoder side processing, two patch libraries are generated, one
patch library for pruning, the other patch library for metadata
encoding. The patch library for pruning is generated using the
original frame, whereas the patch library for metadata encoding is
generated using the reconstructed frame. For the patch library for
pruning, the patches in the library are grouped into clusters so
that the pruning search process can be efficiently performed. The
video frames used for library creation are divided into overlapping
blocks to form a training data set. The training data is first
cleaned up by removing all blocks that do not include
high-frequency details. A modified K-means clustering algorithm is
used to group the patches in the training data set into clusters.
For each cluster, the cluster center is the average of the patches
in the cluster, and is used for matching to an incoming query
during the pruning process. The modified K-means clustering
algorithm guarantees that the error between any patch within a
cluster and its cluster center is smaller than a specified
threshold. The modified K-means clustering algorithm could be
replaced by any similar clustering algorithm which ensures the
error bound in the clusters.
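By way of illustration only, the following Python sketch shows one
plausible realization of such error-bounded clustering, in which
clusters are recursively split until every member patch lies within
the error bound of its cluster center. The function names, the binary
split, and the representation of patches as flattened numpy vectors
are assumptions for the sketch, not the exact implementation.

import numpy as np

def kmeans(patches, k, max_iter=20, seed=0):
    # Plain K-means on flattened patch vectors; returns (centers, labels).
    rng = np.random.default_rng(seed)
    centers = patches[rng.choice(len(patches), k, replace=False)].astype(float)
    labels = np.zeros(len(patches), dtype=int)
    for _ in range(max_iter):
        d = np.linalg.norm(patches[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            members = patches[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return centers, labels

def error_bounded_clusters(patches, error_bound, k=2):
    # Recursively split until every patch is within error_bound of its
    # cluster center, which is the guarantee the pruning search relies on.
    center = patches.mean(axis=0)
    if len(patches) < k or np.linalg.norm(patches - center, axis=1).max() <= error_bound:
        return [(center, patches)]
    _, labels = kmeans(patches, k)
    groups = [patches[labels == j] for j in range(k)]
    if any(len(g) == 0 for g in groups):  # degenerate split; stop recursing
        return [(center, patches)]
    out = []
    for g in groups:
        out.extend(error_bounded_clusters(g, error_bound, k))
    return out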
[0093] To speed up computation, the horizontal and vertical
dimensions of the training frames are reduced to one quarter of the
original size. Also, the clustering process is performed on the
patches in the downsized frames. In one exemplary embodiment, the
size of the high-resolution patches is 16×16 pixels, and the
size of the downsized patches is 4×4 pixels. Therefore, the
downsize factor is 4. Of course, other sizes can be used, while
maintaining the spirit of the present principles.
[0094] For the patch library for metadata encoding, the clustering
process and the clean-up process are not performed; therefore, the
library includes all possible patches from the reconstructed frame.
However, for every patch in the patch library created from the
original frames, its corresponding patch can be found in the patch
library created from the reconstructed frame using the coordinates
of the patches. This ensures that metadata encoding can be
correctly performed. For the decoder side, the same patch library
without clustering is created using the same decoded video frames
for metadata decoding and pruned block recovery.
[0095] For the patch libraries created using decoded frames at both
the encoder and decoder side, another process is conducted to
create the signatures of the patches. The signature of a patch is a
feature vector that includes the average color of the patch and the
surrounding pixels of the patch. The patch signatures are used for
the metadata encoding process to more efficiently encode the
metadata, and used in the recovery process at the decoder side to
find the best-match patch and more reliably recover the pruned
content. Turning to FIG. 7, an exemplary signature vector is
indicated generally by the reference numeral 700. The signature
vector 700 includes an average color 701 and surrounding pixels
702.
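As a rough illustration of how such a signature could be assembled,
the sketch below concatenates the patch's average color with the ring
of pixels immediately surrounding the patch. The patch size, the ring
width, and the frame layout (an H×W×C numpy array) are assumptions of
the sketch.

import numpy as np

def patch_signature(frame, y, x, size=16, border=2):
    # Signature = average color of the patch followed by the flattened
    # ring of surrounding pixels (the patch itself is masked out).
    patch = frame[y:y + size, x:x + size].astype(float)
    avg_color = patch.reshape(-1, frame.shape[2]).mean(axis=0)
    y0, x0 = max(y - border, 0), max(x - border, 0)
    y1 = min(y + size + border, frame.shape[0])
    x1 = min(x + size + border, frame.shape[1])
    around = frame[y0:y1, x0:x1].astype(float)
    around[y - y0:y - y0 + size, x - x0:x - x0 + size] = np.nan
    ring = around.reshape(-1, frame.shape[2])
    ring = ring[~np.isnan(ring).any(axis=1)]  # keep only surrounding pixels
    return np.concatenate([avg_color, ring.ravel()])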
[0096] The metadata encoding process is described herein below. In
the pruned frame, sometimes the neighboring blocks of a pruned
block for recovery or metadata encoding are also pruned. Then the
set of surrounding pixels used as the signature for search in the
patch library only includes the pixels from the non-pruned blocks.
If all the neighboring blocks are pruned, then only the average
color 701 is used as the signature. This may result in poor patch
matches, since too little information is used for matching; that is
why the neighboring non-pruned pixels 702 are important.
Pruning Process
[0097] Similar to standard video encoding algorithms, the input
video frames are divided into Groups of Pictures (GOPs). The pruning
process is conducted on the first frame of a GOP. The pruning
result is propagated to the rest of the frames in the GOP
afterwards.
Pruning Process for the First Frame in a GOP
[0098] Turning to FIG. 8, an exemplary second portion for
performing encoder side processing in an example-based data pruning
system using intra-frame patch similarity is indicated generally by
the reference numeral 800. The second portion 800 includes a
divider 805 having an output in signal communication with an input
of a patch library searcher 810. An output of the patch library
searcher 810 is connected in signal communication with an input of
a video encoder 815, a first input of a metadata generator 830, and
a first input of a metadata encoder 825. An output of the metadata
generator 830 is connected in signal communication with a second
input of the metadata encoder 825. A first output of the video
encoder 815 is connected in signal communication with a second
input of the metadata generator 830. An input of the divider 805 is
available as an input of the second portion 800, for receiving an
input frame. An output of the video encoder 815 is available as an
output of the second portion 800, for outputting an encoded video
frame. An output of the metadata encoder 825 is available as an
output of the second portion 800, for outputting encoded
metadata.
[0099] Turning to FIG. 9, an exemplary method for pruning a video
frame is indicated generally by the reference numeral 900. At step
905, a video frame is input. At step 910, the video frame is
divided into non-overlapping blocks. At step 915, a loop is
performed for each block. At step 920, a search is performed in the
patch library. At step 925, it is determined whether or not a patch
has been found. If so, then the method proceeds to step 930.
Otherwise, the method returns to step 915. At step 930, the block
is pruned. At step 935, it is determined whether or not all blocks
have been finished. If so, then the method proceeds to step 940.
Otherwise, the method returns to step 915. At step 940, the pruned
frame and corresponding metadata are output.
[0100] Thus, the input frame is first divided into non-overlapping
blocks per step 910. The size of each block is the same as the size
of the macroblock used in standard compression algorithms, which in
our current implementation is 16×16 pixels. A search process
then follows to find the best-match patch in the patch library
per step 920. This search process is illustrated in FIG. 10.
Turning to FIG. 10, a patch search process performed during
pruning is indicated generally by the reference numeral 1000. The
patch search process 1000 involves a patch library 1010 which, in
turn, includes a signature portion 1011 and a high resolution patch
portion 1012. First, the block is matched with the centers of the
clusters by calculating the Euclidean distance, and finding the top
K matched clusters. Currently, K is determined empirically. In
principle, K is determined by the error bound of the clusters. Of
course, other approaches to calculating K may also be used in
accordance with the teachings of the present principles. After the
candidate clusters are identified, the search process is conducted
within those clusters until the best-match patch is found. If the
difference between the best-match patch and the query block is
sufficiently small, the block is pruned. Otherwise, the block is
kept intact.
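A condensed Python sketch of this search follows. Here `clusters` is
assumed to be a list of (center, member_patches) pairs as produced by
the clustering step, and the value of the pruning threshold is an
illustrative assumption.

import numpy as np

def find_best_patch(block, clusters, top_k=5, prune_threshold=200.0):
    # Rank clusters by distance from the query block to each cluster
    # center, then search the top-K clusters exhaustively.
    q = block.ravel().astype(float)
    center_dists = [np.linalg.norm(q - center.ravel()) for center, _ in clusters]
    candidates = np.argsort(center_dists)[:top_k]
    best, best_dist = None, float("inf")
    for ci in candidates:
        for patch in clusters[ci][1]:
            d = np.linalg.norm(q - patch.ravel().astype(float))
            if d < best_dist:
                best, best_dist = patch, d
    # The block is pruned only if the best match is close enough.
    return (best, best_dist) if best_dist < prune_threshold else (None, best_dist)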
[0101] The preceding approach differs from the aforementioned
first approach in that, in the first approach, the patches in the
input frames are used both to create the patch library and to
recover the pruned blocks, resulting in a block dependency problem:
the patches needed for recovery may not be available when a block is
being recovered. This problem is solved by the following two
solutions provided by the present principles: pruning with causal
recovery; and pruning using a dependency graph.
a) Pruning for Causal Recovery
[0102] In the case of pruning with causal recovery, for a pruned
block, the search process (also the recovery process at the decoder
side) will only look at the patches preceding the pruned block in
the coordinates in the patch library. Turning to FIG. 11, a block
search area in a process of pruning for causal recovery is
indicated generally by the reference numeral 1100. The block search
area 1100 includes an example patch 1110 and a candidate block
1120. The patches preceding the pruning block (the candidate block
1120) are within the shaded area in FIG. 11. In recovery, by the
time the pruned block 1120 is being recovered, all blocks within
the shaded area have been recovered, therefore all patches within
the shaded area should be available.
b) Pruning with Dependency Graph
[0103] The limitation of the above approach is that there may be no
example patches available for the blocks at the top of a frame.
This problem may be solved by a full-frame patch search with the
help of a patch dependency graph. The patch dependency graph is a
directed acyclic graph (DAG), where each node of the graph
represents a candidate block for pruning, and each edge represents
the dependency of the blocks. Turning to FIG. 12, an exemplary
block dependency graph is indicated generally by the reference
numeral 1200. In FIG. 12, each circle is a node of the dependency
graph, which represents a block in a video frame. The arrows (edges
of the graph) between the circles indicate the dependencies of the
blocks. If an arrow starts from one circle and points to another,
then the circle that the arrow starts from is dependent on the
circle that the arrow points to. If a circle does not have any
arrow pointing to any other circle, the corresponding block is an
unpruned block that does not need recovery after decoding. The
dependency graph 1200 is created during the block search process.
For a candidate block, after the block search process, if the
best-match patch is found, then an edge that points from the
candidate block to the blocks overlapping with the best-match
patches would be created in the dependency graph. After the search
process is done for all the candidate blocks, a process is carried
out to obtain the recovery sequence of the pruned blocks. Turning
to FIG. 13, an exemplary method for obtaining a recovery sequence
is indicated generally by the reference numeral 1300. At step 1305,
a block dependency graph is input. At step 1310, end nodes are
found in the block dependency graph. At step 1315, block IDs are
saved and end nodes are removed. At step 1320, it is determined
whether or not any end nodes remain. If so, then the method
proceeds to step 1325. Otherwise, the method returns to step 1310.
At step 1325, it is determined whether or not the graph is empty.
If so, then the method proceeds to step 1330. Otherwise, the method
proceeds to step 1335. At step 1330, the block IDs are output and
saved as a recovery sequence. At step 1335, a node with the maximum
indegree is found. At step 1340, the node is removed from the block
dependency graph.
[0104] The method for obtaining a recovery sequence is further
described as follows:
[0105] 1. Find all end nodes (i.e., the nodes that do not depend on
other nodes) in the graph and save the corresponding block
coordinates (here the block coordinates are represented as the
block IDs as shown in FIG. 12) of all the end nodes. After the
block IDs are saved, the corresponding nodes are removed from the
dependency graph. The procedure repeats until all end nodes are
removed. If the dependency graph is empty, the algorithm ends,
otherwise the algorithm goes to step (2).
[0106] 2. Find a node in the graph with maximum indegree, i.e., a
node corresponding to the block upon which the maximum number of
other blocks depends. Remove the node from the graph, but do not
save the ID of the block (i.e., the block will not be pruned).
Repeat this procedure until new end nodes emerge in the graph. Then
the algorithm goes back to step (1). The block is not pruned
because the block cannot be recovered using the available pixels
(decoded pixels and recovered pixels) in the frame. On the other
hand, other blocks may depend on this block for recovery. Therefore,
in order to prune the maximum number of blocks, the block upon which
the maximum number of other blocks depends is found. After the
block is kept unpruned (i.e., removed from the graph), new end nodes
may emerge, and then step (1) can be applied again to prune more
blocks.
[0107] Turning to FIG. 14, an exemplary evolution of the dependency
graph using the pruning algorithm is indicated generally by the
reference numeral 1400. The evolution involves the graph before
pruning 1410, the graph after step (1) is performed (1420), and the
graph after step (2) is performed (1430).
[0108] By using the above algorithm, a block recovery sequence is
obtained which ensures that the best-matching patch is available
when a corresponding block is being recovered during the recovery
process.
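The following sketch is one way to express steps (1) and (2) in
Python. The dependency graph is assumed to be given as a dictionary
mapping each candidate block ID to the set of block IDs it depends on
(the arrows in FIG. 12); blocks dropped in step (2) are simply left
unpruned. This is an illustrative rendering, not the exact
implementation.

def recovery_sequence(deps):
    # deps: block ID -> set of block IDs it depends on.
    # Returns the ordered list of blocks to prune (and later recover).
    deps = {b: set(d) for b, d in deps.items()}
    order = []
    while deps:
        # Step (1): repeatedly strip end nodes (no remaining dependencies).
        ends = [b for b, d in deps.items() if not d]
        while ends:
            for b in ends:
                order.append(b)
                del deps[b]
            for d in deps.values():
                d.difference_update(ends)
            ends = [b for b, d in deps.items() if not d]
        if not deps:
            break
        # Step (2): drop (leave unpruned) the node with maximum indegree,
        # i.e., the block upon which the most other blocks depend.
        indegree = {b: 0 for b in deps}
        for d in deps.values():
            for t in d:
                if t in indegree:
                    indegree[t] += 1
        victim = max(indegree, key=indegree.get)
        del deps[victim]
        for d in deps.values():
            d.discard(victim)
    return order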
[0109] After the blocks are identified for pruning, a process is
conducted to prune the blocks. Different pruning strategies are
possible. For example, replacing the high-resolution blocks with
low-resolution blocks is one strategy that may be used. However, it
has been found difficult for this approach to achieve a significant
compression efficiency gain. Therefore, in the current system, a
high-resolution block is simply replaced with a flat block, in which
all pixels have the same color value, namely the average of the
color values of the pixels within the original block. The block
replacement process creates a video frame where some parts of the
frame have high resolution and some parts have low resolution;
therefore, such a frame is called a mixed-resolution frame. Turning
to FIG. 15, an exemplary mixed-resolution frame is indicated
generally by the reference numeral 1500. Our experiments show that
such a flat-block replacement scheme is quite effective in gaining
compression efficiency. The flat block replacement scheme could be
replaced by a low-resolution block replacement scheme, where the
block for pruning is replaced by its low-resolution version.
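The flat-block replacement itself is a one-line operation per block.
A minimal sketch follows, assuming the frame is a numpy array and
16×16 blocks; the helper name is illustrative.

import numpy as np

def prune_block(frame, y, x, size=16):
    # Replace the block with a flat block: every pixel is set to the
    # block's average color (the mixed-resolution pruning step).
    block = frame[y:y + size, x:x + size]
    frame[y:y + size, x:x + size] = block.mean(axis=(0, 1)).astype(frame.dtype)
    return frame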
Metadata Encoding and Decoding
[0110] Metadata encoding includes two components (see FIG. 16): one
for encoding the pruned block IDs (see FIG. 17), the other for
encoding the patch indices (see FIG. 18), which are the results of
searching the patch library for each block during the pruning
process.
[0111] Turning to FIG. 16, an exemplary method for encoding
metadata is indicated generally by the reference numeral 1600. At
step 1605, a decoded pruned video frame, pruned block IDs, and a
patch index for each block are input. At step 1610, pruned block
IDs are encoded. At step 1615, the patch index is encoded. At step
1620, the encoded metadata is output.
[0112] Turning to FIG. 17, an example method for encoding pruned
block IDs is indicated generally by the reference numeral 1700. At
step 1705, a pruned frame and pruned block IDs are input. At step
1710, a low-resolution block identification is performed. At step
1720, it is determined whether or not there are any misses. If so,
then the method proceeds to step 1725. Otherwise, the method
proceeds to step 1715. At step 1725, it is determined whether or
not the number of false positives is more than the number of pruned
blocks. If so, then the method proceeds to step 1730. Otherwise,
control proceeds to step 1735. At step 1730, the pruned block
sequence is used, and a flag is set equal to zero. At step 1740, a
differentiation is performed. At step 1745, lossless encoding is
performed. At step 1750, the encoded metadata is output. At step
1715, a threshold is adjusted. At step 1735, the false positive
sequence is used, and the flag is set equal to one.
[0113] Turning to FIG. 18, an exemplary method for encoding a patch
index is indicated generally by the reference numeral 1800. At step
1805, a decoded pruned video frame and a patch index for each block
are input. At step 1810, a loop is performed for each pruned block.
At step 1815, a signature is obtained. At step 1820, the distances
to the patches in the patch library are calculated. At step 1825,
the patches are sorted to obtain a rank list. At step 1830, the
rank number is obtained. At step 1835, the rank number is entropy
coded. At step 1840, it is determined whether or not all blocks are
finished (being processed). If so, then the method proceeds to step
1845. Otherwise, the method returns to step 1810. At step 1845, the
encoded patch index is output.
[0114] During the pruning process, for each block, the system
searches for the best-match patch in the patch library and outputs a
patch index in the patch library for the found patch if the
distortion is less than a threshold. Each patch is associated with
its signature (i.e., its color plus the surrounding pixels in the
decoded frames). During the recovery process at the decoder side,
the color of the pruned block and its surrounding pixels are used as
a signature to find the correct high-resolution patch in the
library.
[0115] However, due to noise, the search process using the
signature is not reliable, and metadata is needed to assist the
recovery process to ensure reliability. Therefore, after the
pruning process, the system will proceed to generate metadata for
assisting recovery. For each pruned block, the search process
described above already identifies the corresponding patches in the
library. The metadata encoding component will simulate the recovery
process to encode the metadata. A new patch library will be created
using the decoded pruned frame to ensure the patch library is
exactly the same as that in the decoder side. The frame is divided
into overlapping patches and signatures are created for the
patches. During the recovery simulation process, the patch library
has to be dynamically updated because some pruned blocks will be
recovered during the process. This process is illustrated in FIG.
18. For each pruned block, its
query signature (the average color of the pruned block plus the
surrounding pixels) will be used to match the signatures of the
patches in the library. For each block, the distances (e.g.,
Euclidean, although, of course, other distance metrics may be used)
between the query vector and the signatures of the patches in the
library are calculated. The patches are sorted according to the
distances, resulting in a rank list. In the ideal case, the
best-match high-resolution patch should be at the top of the rank
list. However, due to the noise caused by arithmetic rounding and
compression, the best-match patch is often not the first one in the
rank list. Presume that the correct patch is the n-th patch in
the rank list. The rank number n will be saved as the metadata for
the block. It should be noted that, in most cases, n is 1 or a
very small number, because the best-match patch is close to the top
of the rank list; therefore, the entropy of this rank number is
significantly smaller than that of the index of the best-match patch
in the library, which should follow a uniform distribution having
maximum entropy. Therefore, the rank number can be more efficiently
encoded by entropy coding. The rank numbers of all the pruned blocks
form a
rank number sequence as part of the metadata sent to the decoder
side. It is observed from the actual experiments that the
distribution of the rank numbers is close to a geometric
distribution; therefore, the Golomb code is currently used for
further encoding the rank number sequence, since the Golomb code is
optimal for a random number having a geometric distribution. Of
course, other types of codes may also be used in accordance with
the teachings of the present principles, while maintaining the
spirit of the present principles.
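The core of the metadata generation, i.e., turning a known best-match
patch index into a rank number against the decoder-visible library,
might look as follows in Python. The signature arrays and the 1-based
rank convention are assumptions of this sketch.

import numpy as np

def rank_of_best_patch(query_signature, library_signatures, best_patch_index):
    # Simulate decoder-side matching: sort library patches by signature
    # distance to the query and return the (1-based) rank at which the
    # known best-match patch appears; this rank is the block's metadata.
    d = np.linalg.norm(library_signatures - query_signature, axis=1)
    rank_list = np.argsort(d)  # patch indices, closest signature first
    return int(np.where(rank_list == best_patch_index)[0][0]) + 1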
[0116] For decoding (see FIG. 19), the decoder side should have
exactly the same patch library as the encoder, as the signature of
the pruned block will be used to match with the signatures in the
patch library to get a rank list (the sorted patch library). The
rank number is used to retrieve the correct patch from the sorted
patch library. If the patch library is created from previous
frames, in order to ensure the encoder and decoder side have
exactly the same patch library, the metadata encoding process at
the encoder side should also use the decoded frames from the video
decoder because only the decoded frames are available at the
decoder side.
[0117] Turning to FIG. 19, an exemplary method for decoding a patch
index is indicated generally by the reference numeral 1900. At step
1905, a decoded pruned video frame, an encoded patch index, and
pruned block IDs are input. At step 1910, a loop is performed for
each pruned block. At step 1915, a signature is obtained. At step
1920, the distances to the patches in the patch library are
calculated. At step 1925, the patches are sorted to obtain a rank
list. At step 1930, the encoded rank number is entropy decoded. At
step 1935, the patch index is retrieved from the patch library
using the rank number. At step 1940, it is determined whether or
not all blocks are finished (being processed). If so, then the
method proceeds to step 1945. Otherwise, the method returns to step
1910. At step 1945, the decoded patch index is output.
[0118] Besides the rank number metadata, it is necessary to send
the locations of the pruned blocks to the decoder side. This is
done by block ID encoding (see FIG. 17). One simple way is to just
send a block ID sequence to the decoder side. The ID of a block
indicates the coordinate of the block on the frame. Turning to FIG.
20, an exemplary block ID is indicated generally by the reference
numeral 2000. However, it is also possible to more efficiently
encode the ID sequence of the pruned blocks. Since the pruned
blocks are flat and do not include any high-frequency components,
it is possible to detect the pruned blocks by calculating the color
variation within the blocks. If the color variation is smaller than
a threshold, then the block is identified as a pruned block.
However, since such an identification process is not reliable due
to noise caused by compression, metadata is needed to facilitate
the identification process. First, the variance threshold is
determined by starting from a high threshold value. The algorithm
then slowly decreases the variance threshold such that all pruned
blocks can still be identified by the identification procedure,
although false positive blocks may be present in the identified
results.
Afterwards, if the number of false positives is larger than the
number of pruned blocks, the IDs of the pruned blocks are saved
and sent to the decoder; otherwise, the IDs of the false positives
are sent to the decoder side. The variance threshold for identifying
flat blocks is also sent to the decoder side for running the same
identification procedure. For the pruning method with causal
recovery as described above, the order of the block IDs does not
matter, so the ID sequence can be sorted so that the numbers are
increasing.
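One possible rendering of this threshold selection, together with the
pruned-ID versus false-positive-ID decision, is sketched below. The
starting threshold, the decay factor, the row-major block ID layout,
and the use of a single-channel (luma) frame are assumptions of the
sketch.

import numpy as np

def choose_variance_threshold(frame, pruned_ids, block=16, start=500.0, factor=0.9):
    # Per-block color variance over a single-channel (luma) frame;
    # block IDs are assumed to be row-major indices.
    h, w = frame.shape[0] // block, frame.shape[1] // block
    var = np.array([frame[r * block:(r + 1) * block,
                          c * block:(c + 1) * block].var()
                    for r in range(h) for c in range(w)])
    pruned = set(pruned_ids)
    t = start
    while not all(var[b] < t for b in pruned):
        t *= 2.0      # raise until every pruned block is detected
    while pruned and all(var[b] < t * factor for b in pruned):
        t *= factor   # then lower slowly to shed false positives
    detected = {i for i in range(h * w) if var[i] < t}
    false_positives = sorted(detected - pruned)
    if len(false_positives) > len(pruned):
        return t, 0, sorted(pruned)       # flag = 0: send pruned-block IDs
    return t, 1, false_positives          # flag = 1: send false-positive IDs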
[0119] To further reduce redundancy, it is possible to use a
differential coding scheme that computes the difference between each
ID number and the previous ID number, and encodes the difference
sequence. For example, assuming the ID sequence is 3, 4, 5, 8, 13,
14, the differentiated sequence becomes 3, 1, 1, 3, 5, 1. The
differentiation process makes the numbers closer to 1, therefore
resulting in a number distribution with smaller entropy. The
differentiated sequence then can be further encoded with entropy
coding (e.g., Golomb code in our current implementation). Thus, the
format of the final metadata is shown as follows:

flag | threshold | encoded block ID sequence | encoded rank number sequence
where flag is a signaling flag to indicate whether or not the block
ID sequence is a false positive ID sequence, the threshold is the
variance threshold for flat block identification, the encoded block
ID sequence is the encoded bit stream of the pruned block IDs or
the false positive block IDs, and the encoded rank number sequence
is the encoded bit stream of the rank numbers used for block
recovery.
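A small sketch of the differential coding step follows, using a
Golomb-Rice code (the power-of-two special case of the Golomb code)
as one illustrative entropy coder; the Rice parameter k and the
function names are assumptions.

def golomb_rice_encode(values, k=2):
    # Golomb-Rice code (Golomb with divisor 2**k) for integers >= 1:
    # unary quotient, '0' separator, k-bit binary remainder.
    bits = []
    for v in values:
        q, r = (v - 1) >> k, (v - 1) & ((1 << k) - 1)
        bits.append("1" * q + "0" + format(r, "0{}b".format(k)))
    return "".join(bits)

def encode_block_ids(ids):
    # Differential coding of a sorted, strictly increasing ID sequence;
    # all differences (and the first ID) are assumed to be >= 1.
    diffs = [ids[0]] + [b - a for a, b in zip(ids, ids[1:])]
    return golomb_rice_encode(diffs)

# The example from the text: IDs 3, 4, 5, 8, 13, 14 give the
# difference sequence 3, 1, 1, 3, 5, 1.
print(encode_block_ids([3, 4, 5, 8, 13, 14]))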
Pruning Process for the Rest of the Frames in a GOP
[0120] For the rest of the frames in a GOP, some of the blocks in
the frames will also be replaced by flat blocks. The positions of
the pruned blocks in the first frame can be propagated to the rest
of the frames by motion tracking. Different strategies have been
tried to propagate the positions of the pruned blocks. One approach
is to track the pruned blocks across frames by block matching, and
prune the corresponding blocks in the subsequent frames (i.e.,
replace the tracked blocks with flat blocks). However, this approach
does not result in a good compression efficiency gain because, in
general, the boundaries of the tracked blocks do not align with the
coding macroblocks. As a result, the boundaries of the tracked
blocks create a high-frequency signal in the macroblocks. Therefore,
a simpler alternative approach is currently used, which sets all the
block positions for the subsequent frames to the same positions as
in the first frame. Namely, all the pruned blocks in the subsequent
frames are collocated with the pruned blocks in the first frame. As
a result, all of the pruned blocks for the subsequent frames are
aligned with macroblock positions.
[0121] However, this approach would not work well if there is
motion in the pruned blocks. Therefore, one solution to the
problem is to calculate the motion intensity of the block
(see FIG. 21). Turning to FIG. 21, an exemplary method for pruning
subsequent frames is indicated generally by the reference numeral
2100. At step 2105, a video frame and pruned block IDs are input.
At step 2110, collocated blocks are pruned. At step 2115, a loop is
performed for each block. At step 2120, a motion vector is
calculated to the previous frame. At step 2125, the motion vectors
are saved as metadata. At step 2130, it is determined whether or
not all blocks are finished (being processed). If so, then the
method proceeds to step 2135. Otherwise, the method returns to step
2115.
[0122] If the motion intensity is larger than a threshold, the
block would not be pruned. Another more sophisticated solution,
which is our current implementation, is to calculate the motion
vectors of the pruned blocks in the original video by finding the
corresponding block in the previous frame (see FIG. 22). Turning to
FIG. 22, an exemplary motion vector for a pruned block is indicated
generally by the reference numeral 2200. The motion vector 2200
relates to a pruned block in an i-th frame and a co-located block
in an (i-1)-th frame. The motion vectors of the pruned blocks would
be sent to the decoder side for recovery purposes. Since the
previous frame would already have been completely recovered, the
pruned blocks in the current frame can be recovered using the
motion vectors. To avoid artifacts, if the difference between the
block in the current frame and the corresponding patch block
calculated by motion estimation in the previous frame is too large,
then the block in the current frame would not be pruned.
Furthermore, subpixel motion estimation is currently used to make
motion vector based recovery more accurate. Our experiments show
that the resultant visual quality using subpixel based motion
vector estimation is much better than that using regular pixel
based motion vector estimation.
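For orientation, an integer-pel block-matching sketch of the motion
vector calculation is given below; the search range, the SAD cost,
and the single-channel frames are assumptions, and, as noted above,
the actual system refines this with subpixel estimation.

import numpy as np

def motion_vector(prev, cur, y, x, size=16, search=8):
    # Full-search block matching: find the displacement from the block
    # at (y, x) in the current frame to the previous frame, using the
    # sum of absolute differences (SAD) as the matching cost.
    block = cur[y:y + size, x:x + size].astype(int)
    best, best_sad = (0, 0), float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if 0 <= yy and 0 <= xx and yy + size <= prev.shape[0] \
                    and xx + size <= prev.shape[1]:
                sad = np.abs(block - prev[yy:yy + size, xx:xx + size].astype(int)).sum()
                if sad < best_sad:
                    best, best_sad = (dy, dx), sad
    # The caller prunes the block only if best_sad is small enough.
    return best, best_sad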
Recovery Process
[0123] The recovery process takes place at the decoder side. The
patch library is created before the recovery process by obtaining
all the overlapping patches and creating the signatures using the
first decoded frame in the GOP. However, different from the
aforementioned first approach, the patch library has to be
dynamically updated during the recovery process, because the pruned
blocks in the frame will be replaced with the recovered blocks
during the recovery process.
[0124] For the first frame in a GOP, the recovery process starts
with decoding the metadata (see FIG. 23), including decoding the
block ID sequence (see FIG. 24) and the rank order sequence (see
FIG. 23). Turning to FIG. 23, an exemplary method for decoding
metadata is indicated generally by the reference numeral 2300. At
step 2305, encoded metadata is input. At step 2310, pruned block
IDs are decoded. At step 2315, a patch index is decoded. At step
2320, decoded metadata is output.
[0125] Turning to FIG. 24, an exemplary method for decoding pruned
block IDs is indicated generally by the reference numeral 2400. At
step 2405, encoded metadata is input. At step 2410, lossless
decoding is performed. At step 2415, reverse differentiation is
performed. At step 2420, it is determined whether or not a flag is
equal to zero. If so, then the method proceeds to step 2425.
Otherwise, the method proceeds to step 2430. At step 2425, block
IDs are output. At step 2430, a low resolution block identification
is performed. At step 2435, false positives are removed. At step
2440, block IDs are output.
[0126] After the block ID sequence is available, for each pruned
block, the average color and surrounding pixels of this block will
be taken as the signature to match with the signatures in the patch
library. However, if the neighboring blocks of the block for
recovery are also pruned, then the set of surrounding pixels used
as the signature for search only includes the pixels from the
non-pruned blocks. If all the neighboring blocks are pruned, then
only the average color is used as the signature. The matching
process is realized by calculating the Euclidean distances between
the signature of the query block and those of the patches in the
library. After all the distances are calculated, the list is sorted
according to the distances, resulting in a rank list. The rank
number corresponding to the pruned block is then used to retrieve
the correct high-resolution block from the rank list.
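Mirroring the encoder-side simulation sketched earlier, the
decoder-side retrieval reduces to sorting the library by signature
distance and indexing with the transmitted rank; a minimal sketch
under the same assumptions follows.

import numpy as np

def recover_block(query_signature, library_signatures, library_patches, rank):
    # Sort the library by signature distance to the query and pick the
    # high-resolution patch at the transmitted (1-based) rank.
    d = np.linalg.norm(library_signatures - query_signature, axis=1)
    rank_list = np.argsort(d)
    return library_patches[rank_list[rank - 1]]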
[0127] Turning to FIG. 25, an exemplary apparatus for performing
decoder side processing for example-based data pruning using
intra-frame patch similarity is indicated generally by the
reference numeral 2500. The apparatus 2500 includes a divider 2505
having an output connected in signal communication with a first
input of a search patch library and replacement block device 2510.
An output of a metadata decoder 2515 is connected in signal
communication with a second input of the search patch library and
replacement block device 2510. An input of the divider 2505 is
available as an input of the apparatus 2500, for receiving pruned
video. An input of the metadata decoder 2515 is available as an
input of the apparatus 2500, for receiving encoded metadata. An
output of the search patch library and replacement block device
2510 is available as an output of the apparatus, for outputting
recovered video.
[0128] Turning to FIG. 26, an exemplary method for recovering a
pruned frame is indicated generally by the reference numeral 2600.
At step 2605, a pruned frame and corresponding metadata are input.
At step 2610, the pruned frame is divided into non-overlapping
blocks. At step 2615, a loop is performed for each block. At step
2620, it is determined whether or not the current block is a pruned
block. If so, then the method proceeds to step 2625. Otherwise, the
method returns to step 2615. At step 2625, a patch is found in the
library. At step 2630, the current block is replaced with the found
patch. At step 2635, it is determined whether or not all blocks are
finished (being processed). If so, then the method proceeds to step
2640. Otherwise, the method returns to step 2615. At step 2640, the
recovered frame is output.
[0129] It is to be appreciated that the block recovery using
example patches can be replaced by traditional inpainting and
texture synthesis based methods.
[0130] Note that for the pruning scheme with dependency graph as
described above, the recovery process has to follow the order of
the block IDs in the ID sequence so that whenever a block is being
recovered, its corresponding patch is available in the patch
library. Furthermore, after each block is recovered, the patch
library has to be updated, i.e., the patches overlapping with the
block have to be replaced with new patches and the signatures for
those patches and their neighbors have to be recalculated.
[0131] For the rest of the frames in a GOP, for each pruned block,
if the motion vector is not available, the content of the block can
be copied from the co-located block in the previous frame. If the
motion vector is available, then it is possible to use the motion
vector to find the corresponding block in the previous frame, and
copy the corresponding block to fill the pruned block. Turning to
FIG. 27, an exemplary method for recovering subsequent frames is
indicated generally by the reference numeral 2700. At step 2705, a
video frame and pruned block IDs are input. At step 2710, a loop is
performed for each block. At step 2715, a motion vector is used to
find the patch in the previous frame. At step 2720, the found patch
is used to replace the pruned block. At step 2725, it is determined
whether or not all blocks are finished (being processed). If so,
then the method proceeds to step 2730. Otherwise, the method
returns to step 2710.
[0132] Block artifacts may be visible since the recovery process is
block-based. A deblocking filter, such as the in-loop deblocking
filter used in the MPEG-4 AVC Standard encoder, can be applied to
reduce the block artifacts.
[0133] These and other features and advantages of the present
principles may be readily ascertained by one of ordinary skill in
the pertinent art based on the teachings herein. It is to be
understood that the teachings of the present principles may be
implemented in various forms of hardware, software, firmware,
special purpose processors, or combinations thereof.
[0134] Most preferably, the teachings of the present principles are
implemented as a combination of hardware and software. Moreover,
the software may be implemented as an application program tangibly
embodied on a program storage unit. The application program may be
uploaded to, and executed by, a machine comprising any suitable
architecture. Preferably, the machine is implemented on a computer
platform having hardware such as one or more central processing
units ("CPU"), a random access memory ("RAM"), and input/output
("I/O") interfaces. The computer platform may also include an
operating system and microinstruction code. The various processes
and functions described herein may be either part of the
microinstruction code or part of the application program, or any
combination thereof, which may be executed by a CPU. In addition,
various other peripheral units may be connected to the computer
platform such as an additional data storage unit and a printing
unit.
[0135] It is to be further understood that, because some of the
constituent system components and methods depicted in the
accompanying drawings are preferably implemented in software, the
actual connections between the system components or the process
function blocks may differ depending upon the manner in which the
present principles are programmed. Given the teachings herein, one
of ordinary skill in the pertinent art will be able to contemplate
these and similar implementations or configurations of the present
principles.
[0136] Although the illustrative embodiments have been described
herein with reference to the accompanying drawings, it is to be
understood that the present principles are not limited to those
precise embodiments, and that various changes and modifications may
be effected therein by one of ordinary skill in the pertinent art
without departing from the scope or spirit of the present
principles. All such changes and modifications are intended to be
included within the scope of the present principles as set forth in
the appended claims.
* * * * *