U.S. patent application number 14/517787 was published by the patent office on 2018-03-15 for a method of sub-prediction unit prediction in 3D video coding.
The applicant listed for this patent is HFI INNOVATION INC. The invention is credited to Jicheng An, Yi-Wen Chen, Jian-Liang Lin, and Kai Zhang.
United States Patent Application: 20180077427
Kind Code: A9
Application Number: 14/517787
Family ID: 53003130
Publication Date: March 15, 2018
Inventors: An, Jicheng; et al.
Method of Sub-Prediction Unit Prediction in 3D Video Coding
Abstract
A method for a three-dimensional encoding or decoding system
incorporating restricted sub-PU level prediction is disclosed. In
one embodiment, the sub-PU level prediction associated with
inter-view motion prediction or view synthesis prediction is
restricted to uni-prediction. In another embodiment, the sub-PU
partition associated with inter-view motion prediction or view
synthesis prediction is disabled if the sub-PU partition would
result in a sub-PU size smaller than the minimum PU split size or if
the PU belongs to a restricted partition group. The minimum PU split
size may correspond to 8×8. The restricted partition group
may correspond to one or more asymmetric motion partition (AMP)
modes.
Inventors: An, Jicheng (Beijing, CN); Zhang, Kai (Beijing, CN); Chen, Yi-Wen (Taichung, TW); Lin, Jian-Liang (Yilan County, TW)
Applicant: HFI INNOVATION INC., Zhubei City, TW
Prior Publication: US 20160112721 A1, published April 21, 2016
Family ID: 53003130
Appl. No.: 14/517787
Filed: October 17, 2014
Related U.S. Patent Documents

Application Number: PCT/CN2013/086271, filed Oct 31, 2013 (priority application of 14/517787)
Current U.S. Class: 1/1
Current CPC Class: H04N 19/105 (20141101); H04N 19/157 (20141101); H04N 19/513 (20141101); H04N 13/128 (20180501); H04N 19/51 (20141101); H04N 13/161 (20180501); H04N 19/597 (20141101)
International Class: H04N 19/597 (20060101); H04N 13/00 (20060101); H04N 19/51 (20060101)
Claims
1. A method for three-dimensional or multi-view video encoding or
decoding, the method comprising: receiving input data associated
with a current texture PU (prediction unit) in a dependent view;
splitting the current texture PU into sub-PUs; locating depth
sub-blocks or texture sub-blocks in a reference view corresponding
to the current texture PU using derived DVs (disparity vectors);
and generating temporal prediction for the current texture PU using
motion information of the texture sub-blocks in the reference view
or generating inter-view prediction based on warped texture samples
in the reference view using the depth sub-blocks for a
three-dimensional coding tool; and when the temporal prediction or
the inter-view prediction generated is bi-prediction: encoding or
decoding the current texture PU using only the temporal prediction
or the inter-view prediction in List0.
2. The method of claim 1, wherein the current texture PU is encoded
or decoded using only the temporal prediction or the inter-view
prediction in List0 when the sub-PUs from splitting the current
texture PU are smaller than a minimum PU split size.
3. The method of claim 2, wherein the minimum PU split size
corresponds to 8×8.
4. The method of claim 1, wherein when current PU width or current
PU height is X times of minimum PU split width or minimum PU split
height and X is greater than or equal to 1, the current texture PU
is divided horizontally or vertically into ceiling(X) sub-PU widths
or sub-PU heights, wherein the first ceiling(X)-1 sub-PU columns or
rows have the minimum PU split width or the minimum PU split
height respectively and a last sub-PU column or row has width or
height equal to the current PU width or current PU height minus
(ceiling(X)-1) times the minimum PU split width or the minimum PU
split height respectively, and wherein ceiling(X) corresponds to a
smallest integer not less than X.
5. The method of claim 1, wherein the three-dimensional coding tool
is view synthesized prediction (VSP).
6. A method for three-dimensional or multi-view video encoding or
decoding, the method comprising: receiving input data associated
with a current texture PU (prediction unit) in a dependent view; if
sub-PUs from splitting the current texture PU are not smaller than
a minimum PU split size or the current texture PU does not belong
to a restricted partition group: splitting the current texture PU
into said sub-PUs; locating depth sub-blocks or texture sub-blocks
in a reference view corresponding to the current texture PU using
first derived DVs (disparity vectors); and generating temporal
prediction for the current texture PU using motion information of
the texture sub-blocks in the reference view or generating
inter-view prediction based on warped texture samples in the
reference view using the depth sub-blocks for a three-dimensional
coding tool; and if the sub-PUs from splitting the current texture
PU are smaller than the minimum PU split size or the current
texture PU belongs to the restricted partition group: locating a
depth block or a texture block in the reference view corresponding
to the current texture PU using a second derived DV; and generating
the temporal prediction for the current texture PU using the motion
information of the texture block in the reference view or
generating the inter-view prediction based on the warped texture
samples in the reference view using the depth block; and encoding
or decoding the current texture PU using the temporal prediction or
the inter-view prediction.
7. The method of claim 6, wherein the minimum PU split size
corresponds to 8×8.
8. The method of claim 6, wherein the restricted partition group
contains one or more asymmetric motion partition (AMP) modes
selected from PART_2NxnU, PART_2NxnD, PART_nLx2N, and
PART_nRx2N.
9. The method of claim 6, wherein the three-dimensional coding tool
is sub-PU level inter-view motion prediction (SPIVMP).
10. An apparatus for three-dimensional or multi-view video coding
system, the apparatus comprising one or more electronic circuits
configured to: receive input data associated with a current texture
PU (prediction unit) in a dependent view; split the current texture
PU into sub-PUs; locate depth sub-blocks or texture sub-blocks in a
reference view corresponding to the current texture PU using
derived DVs (disparity vectors); and generate temporal prediction
for the current texture PU using motion information of the texture
sub-blocks in the reference view or generate inter-view
prediction based on warped texture samples in the reference view
using the depth sub-blocks for a three-dimensional coding tool; and
when the temporal prediction or the inter-view prediction generated
is bi-prediction: encode or decode the current texture PU using
only the temporal prediction or the inter-view prediction in
List0.
11. The apparatus of claim 10, wherein the current texture PU is
encoded or decoded using only the temporal prediction or the
inter-view prediction in List0 when the sub-PUs from splitting the
current texture PU are smaller than a minimum PU split size.
12. The apparatus of claim 11, wherein the minimum PU split size
corresponds to 8×8.
13. The apparatus of claim 10, wherein when current PU width or
current PU height is X times of minimum PU split width or minimum
PU split height and X is greater than or equal to 1, the current
texture PU is divided horizontally or vertically into ceiling(X)
sub-PU widths or sub-PU heights, wherein the first ceiling(X)-1 sub-PU
columns or rows have the minimum PU split width or the minimum PU
split height respectively and a last sub-PU column or row has
width or height equal to the current PU width or current PU height
minus (ceiling(X)-1) times the minimum PU split width or the
minimum PU split height respectively, and wherein ceiling(X)
corresponds to a smallest integer not less than X.
14. The apparatus of claim 10, wherein the three-dimensional coding
tool is view synthesized prediction (VSP).
15. An apparatus for three-dimensional or multi-view video coding
system, the apparatus comprising one or more electronic circuits
configured to: receive input data associated with a current texture
PU (prediction unit) in a dependent view; if sub-PUs from splitting
the current texture PU are not smaller than a minimum PU split size
or the current texture PU does not belong to a restricted partition
group: split the current texture PU into said sub-PUs; locate
depth sub-blocks or texture sub-blocks in a reference view
corresponding to the current texture PU using first derived DVs
(disparity vectors); and generate temporal prediction for the
current texture PU using motion information of the texture
sub-blocks in the reference view or generate inter-view
prediction based on warped texture samples in the reference view
using the depth sub-blocks for a three-dimensional coding tool; and
if the sub-PUs from splitting the current texture PU are smaller
than the minimum PU split size or the current texture PU belongs to
the restricted partition group: locate a depth block or a texture
block in the reference view corresponding to the current texture PU
using a second derived DV; and generate the temporal prediction for
the current texture PU using the motion information of the texture
block in the reference view or generate the inter-view prediction
based on the warped texture samples in the reference view using the
depth block; and encode or decode the current texture PU using the
temporal prediction or the inter-view prediction.
16. The apparatus of claim 15, wherein the minimum PU split size
corresponds to 8×8.
17. The apparatus of claim 15, wherein the restricted partition
group contains one or more asymmetric motion partition (AMP) modes
selected from PART_2NxnU, PART_2NxnD, PART_nLx2N, and
PART_nRx2N.
18. The apparatus of claim 15, wherein the three-dimensional coding
tool is sub-PU level inter-view motion prediction (SPIVMP).
Description
CROSS REFERENCE TO RELATED APPLICATIONS
[0001] The present invention claims priority to PCT Patent
Application, Serial No. PCT/CN2013/086271, filed on Oct. 31, 2013,
entitled "Methods for Sub-PU Level Prediction". The PCT Patent
Application is hereby incorporated by reference in its
entirety.
FIELD OF THE INVENTION
[0002] The present invention relates to three-dimensional video
coding. In particular, the present invention relates to sub-PU
(prediction unit) based prediction associated with inter-view
motion prediction (IVMP) and view synthesis prediction (VSP) for
texture coding in a three-dimensional (3D) coding system.
BACKGROUND AND RELATED ART
[0003] Three-dimensional (3D) television has been a technology
trend in recent years, intended to bring viewers a sensational
viewing experience. Various technologies have been developed to
enable 3D viewing, and multi-view video is a key technology for
3DTV applications among others. To exploit the inter-view
redundancy, 3D coding tools such as sub-PU level inter-view motion
prediction (SPIVMP) and view synthesized prediction (VSP) have been
integrated into conventional 3D-HEVC (High Efficiency Video Coding)
or 3D-AVC (Advanced Video Coding) codecs.
[0004] The SPIVMP processing in the current 3DV-HTM
(three-dimensional video coding based on High Efficiency Video
Coding (HEVC) Test Model) is illustrated in FIG. 1. A current
processed PU (122) in a current dependent view (i.e., V1 120) is
divided into multiple sub-PUs (i.e., A, B, C and D), where each
sub-PU has a smaller size. The disparity vector (DV) associated
with each sub-PU is added to the respective position (as
illustrated by a dot in the center of each sub-PU) to locate a
respective prediction block (i.e., A', B', C' or D') in the
reference view (i.e., V0 110), where the prediction blocks (i.e.,
A', B', C' and D') in the reference view are already coded. The
prediction blocks covering the sample positions are used as reference
blocks. The DV used to derive a reference block in a reference view
for each sub-PU can be a derived DV and the derived DV can be
different for each sub-PU or all sub-PUs can share a unified
derived DV.
[0005] For each reference block, if it is coded using motion
compensated prediction (MCP), the associated motion parameters
(i.e., MV.sub.A', MV.sub.B', MV.sub.C', and MV.sub.D' associated
with reference view V0) can be used as temporal inter-view motion
vector candidate (TIVMC) for the corresponding sub-PU in the
current PU in the current view. Otherwise, the corresponding sub-PU
can share the candidate motion parameters with its spatial
neighbors. The TIVMC of the current PU is composed of the TIVMC of
all the sub-PUs. The sub-PU size can be 4×4, 8×8,
16×16, etc., which can be indicated by a flag in the video
parameter set (VPS).
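As an illustration only, and not part of the patent disclosure, the per-sub-PU candidate derivation described in the two paragraphs above can be sketched in Python. The motion-field lookup `ref_mv_field` (a dict keyed by sub-block coordinates in the reference view) and all helper names are hypothetical:

```python
def spivmp_candidates(pu_x, pu_y, pu_w, pu_h, sub_size, dv,
                      ref_mv_field, spatial_mv):
    """Derive a temporal inter-view motion vector candidate (TIVMC)
    for each sub-PU of the current PU (illustrative sketch)."""
    candidates = {}
    for y in range(pu_y, pu_y + pu_h, sub_size):
        for x in range(pu_x, pu_x + pu_w, sub_size):
            # Add the disparity vector to the sub-PU center position.
            cx = x + sub_size // 2 + dv[0]
            cy = y + sub_size // 2 + dv[1]
            # Reference block in the base view covering that position
            # (hypothetical lookup keyed by sub-block coordinates).
            mv = ref_mv_field.get((cx // sub_size, cy // sub_size))
            if mv is not None:   # reference block is MCP-coded
                candidates[(x, y)] = mv
            else:                # share with spatial neighbors instead
                candidates[(x, y)] = spatial_mv
    return candidates
```

The TIVMC of the whole PU is then simply the collection of the per-sub-PU candidates, as the paragraph above states.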
[0006] An example of VSP processing is shown in FIG. 2. For a
texture PU (222) in a current dependent view (T1 220), the DV (240)
associated with a neighboring block (224) can be used to locate a
corresponding depth block (234) of the depth picture in the
reference view (D0 230). The DV (240) is used to locate a
co-located block (232) of the depth picture in the reference view
(D0 230). The samples in the current texture PU (222) are warped
into corresponding samples in the reference texture view (T0 210).
The mappings (250a-c) illustrate an example of mapping samples in
the current texture PU to samples in the reference texture picture
according to DVs converted from corresponding depth values. The VSP
method in current 3D-HEVC also uses the sub-PU level prediction (as
indicated by dashed lines in block 232 and block 222) and the
sub-PU size can be 8×4 or 4×8. Accordingly, the texture
PU (222) may be divided into sub-PUs and each sub-PU is mapped to a
corresponding depth sub-block.
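A minimal sketch (not part of the patent) of the sub-PU depth-to-disparity step described above; the corner-sampling rule and the `depth_to_dv` conversion function are illustrative assumptions:

```python
def vsp_sub_pu_disparities(depth_block, sub_w, sub_h, depth_to_dv):
    """For each sub-block of the co-located depth block, convert a
    representative depth value into a disparity vector (sketch)."""
    h, w = len(depth_block), len(depth_block[0])
    dvs = {}
    for y in range(0, h, sub_h):
        for x in range(0, w, sub_w):
            # Use the maximum of the four corner depth samples as the
            # representative depth for this sub-block (an assumed rule).
            y2, x2 = min(y + sub_h, h) - 1, min(x + sub_w, w) - 1
            corners = [depth_block[y][x], depth_block[y][x2],
                       depth_block[y2][x], depth_block[y2][x2]]
            dvs[(x, y)] = depth_to_dv(max(corners))
    return dvs
```

Each resulting disparity vector would then be used to warp the corresponding texture sub-PU samples from the reference texture view.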
[0007] In current 3D-HEVC, the sub-PU level inter-view motion
prediction (IVMP) and the sub-PU view synthesis prediction (VSP)
allow a PU to be further split into small sub-PUs. For example, a PU
can be further split into 8×4 or 4×8 sub-PUs for SPIVMP
or VSP. The small sub-PUs will increase the system memory
bandwidth. In order to reduce the memory bandwidth in motion
compensation, bi-prediction for small PU sizes (8×4,
4×8) has been disabled in conventional HEVC. It is
desirable to modify existing 3D-HEVC coding tools such as
SPIVMP and view synthesized prediction (VSP) to relieve the high
memory bandwidth requirement while retaining the performance.
BRIEF SUMMARY OF THE INVENTION
[0008] A method for a three-dimensional encoding or decoding system
incorporating restricted sub-PU level prediction is disclosed.
First embodiments according to the present invention split a
current texture PU into sub-PUs, locate depth sub-blocks or texture
sub-blocks in a reference view corresponding to the current texture
PU using first derived DVs (disparity vectors) and generate
temporal prediction for the current texture PU using motion
information of the texture sub-blocks in the reference view or
generate inter-view prediction based on warped texture samples in
the reference view using the depth sub-blocks. When the generated
temporal prediction or the inter-view prediction for a
three-dimensional coding tool is bi-prediction, the current texture
PU is encoded or decoded using only the temporal prediction or the
inter-view prediction in List0.
[0009] An embodiment of applying the three-dimensional coding tool
disables bi-prediction when the sub-PUs are smaller than a minimum
PU split size. The minimum PU split size may correspond to
8×8. When the current PU width or current PU height is X times
the minimum PU split width or the minimum PU split height and X is
greater than or equal to 1, the current texture PU is divided
horizontally or vertically into ceiling(X) sub-PU widths or sub-PU
heights respectively. The first ceiling(X)-1 sub-PU columns or rows
have the minimum PU split width or the minimum PU split height
respectively. The last sub-PU column or row has width or height
equal to the current PU width or current PU height minus
(ceiling(X)-1) times the minimum PU split width or the minimum PU
split height respectively. Ceiling(X) corresponds to the smallest
integer not less than X.
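The column/row sizing rule above can be written out as a small sketch (illustrative only; the function name is hypothetical, and the rule applies identically to widths and heights):

```python
import math

def sub_pu_sizes(pu_dim, min_dim):
    """Split one PU dimension (width or height) into ceiling(X)
    sub-PU dimensions, where X = pu_dim / min_dim and X >= 1.
    The first ceiling(X)-1 pieces take the minimum split size;
    the last piece takes the remainder."""
    x = pu_dim / min_dim
    n = math.ceil(x)                 # ceiling(X): number of pieces
    sizes = [min_dim] * (n - 1)      # first ceiling(X)-1 columns/rows
    sizes.append(pu_dim - (n - 1) * min_dim)   # remainder for the last
    return sizes
```

For example, a 12-sample dimension with a minimum split size of 8 yields pieces of 8 and 4, while a 16-sample dimension yields two pieces of 8.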
[0010] Second embodiments according to the present invention
perform the following steps when the sub-PUs from splitting the
current texture PU are not smaller than a minimum PU split size or
the current texture PU does not belong to a restricted partition
group: splitting the current texture PU into said sub-PUs; locating
depth sub-blocks or texture sub-blocks in a reference view
corresponding to the current texture PU using first derived DVs
(disparity vectors); and generating temporal prediction for the
current texture PU using motion information of the texture
sub-blocks in the reference view or generating inter-view
prediction based on warped texture samples in the reference view
using the depth sub-blocks according to a three-dimensional coding
tool. When the sub-PUs from splitting the current texture PU are
smaller than the minimum PU split size or the current texture PU
belongs to the restricted partition group, the second embodiments
perform the following steps: locating a depth block or a texture
block in the reference view corresponding to the current texture PU
using a second derived DV; and generating the temporal prediction
for the current texture PU using the motion information of the
texture block in the reference view or generating the inter-view
prediction based on the warped texture samples in the reference
view using the depth block. After the temporal prediction or the
inter-view prediction is generated for both cases, the current
texture PU is encoded or decoded using the temporal prediction or
the inter-view prediction.
[0011] The restricted partition group may contain one or more
asymmetric motion partition (AMP) modes selected from PART_2NxnU,
PART_2NxnD, PART_nLx2N, and PART_nRx2N.
BRIEF DESCRIPTION OF THE DRAWINGS
[0012] FIG. 1 illustrates an example of sub-PU (prediction unit)
level prediction associated with inter-view motion prediction
(IVMP) for three-dimensional video coding.
[0013] FIG. 2 illustrates an example of view synthesis prediction
(VSP), where a depth block in the reference view corresponding to a
PU (prediction unit) in a dependent view is used to warp
corresponding texture data in the reference view according to the
depth block to form VSP.
[0014] FIG. 3 illustrates an example of restricted sub-PU
(prediction unit) level prediction according to an embodiment of
the present invention, where the inter-view motion prediction or
view synthesis prediction is restricted to uni-prediction when the
sub-PU is smaller than minimum PU split size.
[0015] FIG. 4 illustrates an exemplary flowchart of restricted
sub-PU (prediction unit) level prediction according to an
embodiment of the present invention, where bi-prediction is
disabled if the sub-PU is smaller than minimum PU split size.
[0016] FIG. 5 illustrates an exemplary flowchart of restricted
sub-PU (prediction unit) level prediction according to another
embodiment of the present invention, where sub-PU is disabled if
the sub-PU is smaller than minimum PU split size or the PU belongs
to a restricted partition group.
DETAILED DESCRIPTION OF THE INVENTION
[0017] As mentioned above, the use of sub-PU for small block sizes
may cause system bandwidth issues. It is desirable to develop a
method to relieve the high memory bandwidth requirement while
retaining the system performance. Accordingly, the present
invention conditionally imposes restrictions on the sub-PU sizes
for certain 3D coding tools. In one embodiment, the bi-prediction
mode is disabled when the sub-PU size is smaller than a specified
size, such as 8×8. One example of restricted sub-PU according
to the present invention is shown in FIG. 3. The sub-PU size is
8×4, which is smaller than the specified size (8×8 in this
example). In a conventional system, bi-prediction is allowed for
this sub-PU size. Therefore, the conventional approach will allow
the sub-PU to use both motion vectors (MVs) in list 0 (List0) and
list 1 (List1). However, an embodiment according to the present
invention will restrict the prediction in a 3D coding tool to
uni-prediction by using only one list such as List0. In other
words, prediction in List1 is disabled. Alternatively, the
uni-prediction can only use List1 by disabling List0 for motion
prediction in the 3D coding tool. The 3D coding tool may correspond
to inter-view motion prediction or view synthesis prediction.
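The uni-prediction restriction described in this paragraph can be sketched as follows (illustrative only, not the patent's implementation; motion vectors are represented as tuples, `None` meaning "no MV in that list"):

```python
def restrict_to_uni_prediction(mv_l0, mv_l1, sub_w, sub_h,
                               min_w=8, min_h=8, keep_list=0):
    """If the sub-PU is smaller than the minimum split size and both
    lists carry a motion vector (bi-prediction), keep only one list.
    keep_list selects List0 or List1 (sketch; both options are
    mentioned in the text above)."""
    small = sub_w < min_w or sub_h < min_h
    if small and mv_l0 is not None and mv_l1 is not None:
        return (mv_l0, None) if keep_list == 0 else (None, mv_l1)
    return (mv_l0, mv_l1)
```

For an 8×4 sub-PU with MVs in both lists, only the List0 (or List1) MV survives; sub-PUs at or above 8×8 keep both.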
[0018] In another embodiment, the sub-PU size is restricted not to
be smaller than a specified size. For example, the specified
minimal size can be set to 8×8. In this case, if a PU size is
16×12, the PU will not be split in the vertical direction
since the height (i.e., 12) is less than twice the specified
height (i.e., 8). Therefore, the PU can only be divided into two
8×12 sub-PUs. For a 16×4 PU, the height (i.e., 4) is
already smaller than the specified height (i.e., 8), so the PU will
not be divided.
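A minimal sketch of this splitting rule, describing the resulting grid as a count of sub-PU columns and rows (the function name is hypothetical and the behavior mirrors only the two examples given above):

```python
def sub_pu_grid(pu_w, pu_h, min_w=8, min_h=8):
    """Sub-PU grid when sub-PUs may not be smaller than (min_w, min_h).
    A PU with any dimension already below the minimum is not divided
    at all (e.g. 16x4); a dimension smaller than twice the minimum is
    not split in that direction (e.g. the height of a 16x12 PU)."""
    if pu_w < min_w or pu_h < min_h:
        return 1, 1                  # e.g. 16x4: not divided
    cols = pu_w // min_w             # e.g. 16x12 -> two 8x12 columns
    rows = pu_h // min_h             # 12 // 8 == 1: no vertical split
    return cols, rows
```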
[0019] In yet another embodiment, whether the sub-PU level
prediction is used or not depends on the size or the partitioning
type of the current PU. For example, if the size of the current PU
(width or height) is smaller than a specified value (e.g., 8), the
sub-PU level prediction is not allowed for the current PU. In
another example, if the current PU uses asymmetric motion partition
(AMP) mode (e.g. PartMode equal to PART_2NxnU, PART_2NxnD,
PART_nLx2N, or PART_nRx2N), the sub-PU level prediction is not
allowed for the current PU. The sub-PU level prediction may
correspond to sub-PU level inter-view motion prediction (SPIVMP) or
view synthesis prediction (VSP).
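The size and partition-type gating described in this paragraph can be sketched as follows (the mode strings mirror the HEVC PartMode names quoted above; the helper itself is hypothetical):

```python
# Asymmetric motion partition modes, as named in the text above.
AMP_MODES = {"PART_2NxnU", "PART_2NxnD", "PART_nLx2N", "PART_nRx2N"}

def sub_pu_prediction_allowed(pu_w, pu_h, part_mode, min_dim=8):
    """Disallow sub-PU level prediction (SPIVMP or VSP) when the PU
    is smaller than a specified size or uses an AMP mode (sketch)."""
    if pu_w < min_dim or pu_h < min_dim:
        return False                 # PU too small for sub-PU prediction
    if part_mode in AMP_MODES:
        return False                 # AMP-coded PU: no sub-PU prediction
    return True
```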
[0020] As mentioned earlier, the present invention is intended to
relieve the bandwidth requirement due to small sub-PU sizes for
inter-view coding tools such as sub-PU level inter-view motion
prediction (SPIVMP) and view synthesized prediction (VSP). The
performance of a 3D video coding system incorporating restricted
sub-PU is compared with the performance of a conventional system
based on HTM-9.0 as shown in Table 1, where the sub-PU
bi-prediction is disabled when the PU size is smaller than
8×8. The performance comparison is based on different sets of
test data listed in the first column. The BD-rate differences are
shown for texture pictures in view 1 (video 1) and view 2 (video
2). A negative value in the BD-rate implies that the present
invention has a better performance. As shown in Table 1, BD-rate
measure for view 1 and view 2 is about the same as the conventional
HTM-9.0. The BD-rate measure for the coded video PSNR with video
bitrate, the coded video PSNR with total bitrate (texture bitrate
and depth bitrate), and the synthesized video PSNR with total
bitrate are all about the same as the conventional HTM-9.0. The
processing times (encoding time, decoding time and rendering time)
are also compared. As shown in Table 1, slight increases in
encoding time and rendering time (1.4% and 2.5%, respectively) and a
slight decrease in decoding time (1.1% on average) are noted.
Accordingly, the
system that uses restricted sub-PU level prediction for SPIVMP and
VSP according to one embodiment of the present invention achieves
about the same performance as the conventional HTM-9.0. In other
words, there is no performance loss due to the present
invention.
TABLE 1

Sequence       Video 1  Video 2  Video PSNR/    Video PSNR/    Synth PSNR/    Enc time  Dec time  Ren time
                                 video bitrate  total bitrate  total bitrate
Balloons       -0.06%   -0.02%   -0.01%         -0.02%         -0.09%         100.5%    94.8%     102.1%
Kendo          -0.07%   -0.08%   -0.03%         -0.02%         -0.06%         101.3%    110.5%    108.2%
Newspapercc     0.11%    0.04%    0.02%          0.03%         -0.01%         100.9%    100.5%    106.2%
GhostTownFly    0.04%   -0.09%    0.00%          0.00%         -0.01%         102.8%    97.3%     101.4%
PoznanHall2     0.03%    0.18%    0.04%          0.02%          0.04%         101.6%    95.0%     95.7%
PoznanStreet   -0.04%   -0.08%   -0.01%         -0.01%         -0.02%         100.3%    97.1%     97.7%
UndoDancer      0.04%   -0.03%    0.00%         -0.01%         -0.03%         101.9%    93.4%     104.1%
Shark          -0.01%   -0.05%   -0.01%          0.00%          0.00%         101.9%    103.0%    105.0%
1024×768       -0.01%   -0.02%    0.00%          0.00%         -0.06%         100.9%    101.9%    105.5%
1920×1088       0.01%   -0.01%    0.00%          0.00%          0.00%         101.7%    97.2%     100.8%
average         0.00%   -0.02%    0.00%          0.00%         -0.02%         101.4%    98.9%     102.5%
[0021] FIG. 4 illustrates an exemplary flowchart of a
three-dimensional encoding or decoding system incorporating
restricted sub-PU level prediction according to an embodiment of
the present invention. The system receives input data associated
with a current texture PU (prediction unit) in a dependent view in
step 410. For encoding, the input data corresponds to texture PU
data and any associated data (e.g., motion information) to be
encoded. For decoding, the input data corresponds to coded texture
PU and any associated information to be decoded. The input data may
be retrieved from memory (e.g., computer memory, buffer (RAM or
DRAM) or other media) or from a processor. The current texture PU
is split into sub-PUs as shown in step 420. The depth sub-blocks or
texture sub-blocks in a reference view corresponding to the current
texture PU are located by using derived DVs (disparity vectors) as
shown in step 430. Temporal prediction for the current texture PU
is generated using motion information of the texture sub-blocks in
the reference view or the inter-view prediction is generated based
on warped texture samples in the reference view using the depth
sub-blocks for a three-dimensional coding tool as shown in step
440. When the temporal prediction or the inter-view prediction
generated is bi-prediction, the current texture PU is encoded or
decoded using only the temporal prediction or the inter-view
prediction in List0 as shown in step 450.
[0022] FIG. 5 illustrates an exemplary flowchart of a
three-dimensional encoding or decoding system incorporating
restricted sub-PU partition according to another embodiment of the
present invention. The system receives input data associated with a
current texture PU (prediction unit) in a dependent view in step
510. When sub-PUs from splitting the current texture PU are greater
than or equal to a minimum PU split size or the current texture PU
does not belong to a restricted partition group (step 520a), then
steps 530a to 550a are performed. In step 530a, the current texture
PU is split into said sub-PUs. In step 540a, depth sub-blocks or
texture sub-blocks in a reference view corresponding to the current
texture PU are located by using first derived DVs (disparity
vectors). In step 550a, temporal prediction for the current texture
PU is generated using motion information of the texture sub-blocks
in the reference view or inter-view prediction is generated based
on warped texture samples in the reference view using the depth
sub-blocks for a three-dimensional coding tool. When the sub-PUs
from splitting the current texture PU are smaller than the minimum
PU split size or the current texture PU belongs to the restricted
partition group (step 520b), then steps 530b to 540b are performed.
In step 530b, a depth block or a texture block in the reference
view corresponding to the current texture PU is located using a
second derived DV. In step 540b, the temporal prediction for the
current texture PU is generated using the motion information of the
texture block in the reference view or the inter-view prediction is
generated based on warped texture samples in the reference view
using the depth block. After the temporal prediction or the
inter-view prediction is generated for the current texture PU, the
current texture PU is encoded or decoded using the temporal
prediction or the inter-view prediction as shown in step 560.
[0023] The flowcharts shown above are intended to illustrate
examples of 3D or multi-view coding with restricted sub-PU partition
according to the present invention. A person skilled in the art may
modify each step, re-arrange the steps, split a step, or combine
steps to practice the present invention without departing from the
spirit of the present invention.
[0024] The above description is presented to enable a person of
ordinary skill in the art to practice the present invention as
provided in the context of a particular application and its
requirement. Various modifications to the described embodiments
will be apparent to those with skill in the art, and the general
principles defined herein may be applied to other embodiments.
Therefore, the present invention is not intended to be limited to
the particular embodiments shown and described, but is to be
accorded the widest scope consistent with the principles and novel
features herein disclosed. In the above detailed description,
various specific details are illustrated in order to provide a
thorough understanding of the present invention. Nevertheless, it
will be understood by those skilled in the art that the present
invention may be practiced without such specific details.
[0025] Embodiments of the present invention as described above may
be implemented in various hardware, software codes, or a
combination of both. For example, an embodiment of the present
invention can be a circuit integrated into a video compression chip
or program code integrated into video compression software to
perform the processing described herein. An embodiment of the
present invention may also be program code to be executed on a
Digital Signal Processor (DSP) to perform the processing described
herein. The invention may also involve a number of functions to be
performed by a computer processor, a digital signal processor, a
microprocessor, or field programmable gate array (FPGA). These
processors can be configured to perform particular tasks according
to the invention, by executing machine-readable software code or
firmware code that defines the particular methods embodied by the
invention. The software code or firmware code may be developed in
different programming languages and different formats or styles.
The software code may also be compiled for different target
platforms. However, different code formats, styles and languages of
software codes and other means of configuring code to perform the
tasks in accordance with the invention will not depart from the
spirit and scope of the invention.
[0026] The invention may be embodied in other specific forms
without departing from its spirit or essential characteristics. The
described examples are to be considered in all respects only as
illustrative and not restrictive. The scope of the invention is
therefore, indicated by the appended claims rather than by the
foregoing description. All changes which come within the meaning
and range of equivalency of the claims are to be embraced within
their scope.
* * * * *