U.S. patent application number 17/154485 was filed with the patent office on 2021-01-21 and published on 2021-05-13 as publication number 20210144400, titled "Difference Calculation Based on Partial Position."
The applicants listed for this patent are Beijing Bytedance Network Technology Co., Ltd. and Bytedance Inc. The invention is credited to Hongbin LIU, Yue WANG, Jizheng XU, Kai ZHANG, and Li ZHANG.

Publication Number: 20210144400
Application Number: 17/154485
Family ID: 1000005399002
Filed: January 21, 2021
Published: May 13, 2021

United States Patent Application 20210144400
Kind Code: A1
LIU, Hongbin; et al.
May 13, 2021
DIFFERENCE CALCULATION BASED ON PARTIAL POSITION
Abstract
Difference calculation based on partial position is described.
In a representative aspect, a method of video processing comprises: calculating, during a conversion between a current block of video and a bitstream representation of the current block, differences between two reference blocks associated with the current block or differences between two reference sub-blocks associated with a sub-block within the current block, based on representative positions of the reference blocks or representative positions of the reference sub-blocks; and performing the conversion based on the differences.
Inventors: LIU, Hongbin (Beijing, CN); ZHANG, Li (San Diego, CA); ZHANG, Kai (San Diego, CA); XU, Jizheng (San Diego, CA); WANG, Yue (Beijing, CN)

Applicants:

| Name | City | State | Country |
| --- | --- | --- | --- |
| Beijing Bytedance Network Technology Co., Ltd. | Beijing | | CN |
| Bytedance Inc. | Los Angeles | CA | US |

Family ID: 1000005399002
Appl. No.: 17/154485
Filed: January 21, 2021
Related U.S. Patent Documents

| Application Number | Filing Date | Patent Number |
| --- | --- | --- |
| PCT/CN2019/119634 | Nov 20, 2019 | |
| 17154485 (present application) | | |
Current U.S. Class: 1/1

Current CPC Class: H04N 19/577 (20141101); H04N 19/105 (20141101); H04N 19/159 (20141101); H04N 19/176 (20141101); H04N 19/119 (20141101); H04N 19/52 (20141101); H04N 19/184 (20141101); H04N 19/593 (20141101)

International Class: H04N 19/52 (20060101); H04N 19/176 (20060101); H04N 19/184 (20060101); H04N 19/159 (20060101); H04N 19/105 (20060101); H04N 19/593 (20060101); H04N 19/119 (20060101); H04N 19/577 (20060101)
Foreign Application Data

| Date | Code | Application Number |
| --- | --- | --- |
| Nov 20, 2018 | CN | PCT/CN2018/116371 |
| Jan 2, 2019 | CN | PCT/CN2019/070062 |
| Jan 16, 2019 | CN | PCT/CN2019/072060 |
Claims
1. A method of video processing, comprising: determining an
enablement of a first tool, for a conversion between a current
block and a bitstream based on a rule, wherein the rule indicates
that the first tool and a second tool are not both used by the
current block; and performing, based on the determining, a
conversion between the current block and the bitstream; wherein one
of the first tool and the second tool is a decoder side motion
vector derivation tool to derive a refinement of motion information
related to the current block, and the other of the first tool and
the second tool comprises: a combined inter and intra prediction
tool in which a prediction signal of a block is generated at least
based on an intra prediction signal and an inter prediction
signal.
2. The method of claim 1, wherein the decoder side motion vector
derivation tool is disabled upon a determination that the combined
inter and intra prediction tool is used for the current block.
3. The method of claim 1, wherein the other of the first tool and
the second tool further comprises: a merge mode with motion vector
differences which comprises motion vector expression comprising a
first parameter representing a motion vector difference and a
second parameter indicating a base candidate, wherein the first
parameter comprises a motion magnitude and a motion direction.
4. The method of claim 3, wherein the decoder side motion vector
derivation tool is disabled upon a determination that the current
block is coded with the merge mode with motion vector
differences.
5. The method of claim 1, wherein the other of the first tool and the second tool further comprises: a geometric partitioning tool comprising a partition scheme which divides a block into two partitions, at least one of which is non-square and non-rectangular.
6. The method of claim 5, wherein the geometric partitioning tool
is a triangular prediction tool.
7. The method of claim 1, wherein the decoder side motion vector
derivation tool comprises a decoder side motion vector refinement
tool.
8. The method of claim 1, wherein the decoder side motion vector
derivation tool comprises a bi-directional optical flow tool.
9. The method of claim 1, wherein the first tool is the decoder
side motion vector derivation tool, and the conversion comprises:
dividing the current block into at least one subblock; calculating
differences between two reference subblocks associated with a
subblock within the current block based on representative positions
of the reference subblocks; deriving the refinement of the motion
information based on the differences.
10. The method of claim 9, wherein the differences are calculated
in an early termination stage of the first tool.
11. The method of claim 9, wherein calculating differences
comprises: calculating differences of specific rows of two
reference subblocks.
12. The method of claim 11, wherein the specific rows are composed
of one row of every N rows, wherein N is greater than 1.
13. The method of claim 12, wherein N is equal to 2.
14. The method of claim 9, wherein the representative positions are selected by using a predetermined strategy.
15. The method of claim 9, wherein deriving the refinement of the motion information based on the differences comprises: summing up the differences calculated for the representative positions of the reference subblocks to obtain the difference for the subblock; and deriving the refinement of the motion information using the difference for the subblock.
16. The method of claim 15, wherein the difference for the subblock includes a sum of absolute differences (SAD) over the representative positions.
17. An apparatus for processing video data comprising a processor
and a non-transitory memory with instructions thereon, wherein the
instructions upon execution by the processor, cause the processor
to: determine an enablement of a first tool, for a conversion
between a current block and a bitstream based on a rule, wherein
the rule indicates that the first tool and a second tool are not
both used by the current block; and perform, based on the
determining, a conversion between the current block and the
bitstream; wherein one of the first tool and the second tool is a
decoder side motion vector derivation tool to derive a refinement
of motion information related to the current block, and the other
of the first tool and the second tool comprises: a combined inter
and intra prediction tool in which a prediction signal of a block
is generated at least based on an intra prediction signal and an
inter prediction signal.
18. The apparatus of claim 17, wherein the other of the first tool
and the second tool further comprises: a merge mode with motion
vector differences which comprises motion vector expression
comprising a first parameter representing a motion vector
difference and a second parameter indicating a base candidate,
wherein the first parameter comprises a motion magnitude and a
motion direction.
19. A non-transitory computer-readable storage medium storing
instructions that cause a processor to: determine an enablement of
a first tool, for a conversion between a current block and a
bitstream based on a rule, wherein the rule indicates that the
first tool and a second tool are not both used by the current
block; and perform, based on the determining, a conversion between
the current block and the bitstream; wherein one of the first tool
and the second tool is a decoder side motion vector derivation tool
to derive a refinement of motion information related to the current
block, and the other of the first tool and the second tool
comprises: a combined inter and intra prediction tool in which a
prediction signal of a block is generated at least based on an
intra prediction signal and an inter prediction signal.
20. A non-transitory computer-readable recording medium storing a
bitstream which is generated by a method performed by a video
processing apparatus, wherein the method comprises: determining an
enablement of a first tool, for a conversion between a current
block and a bitstream based on a rule, wherein the rule indicates
that the first tool and a second tool are not both used by the
current block; and generating the bitstream from the current block
based on the determining; wherein one of the first tool and the
second tool is a decoder side motion vector derivation tool to
derive a refinement of motion information related to the current
block, and the other of the first tool and the second tool
comprises: a combined inter and intra prediction tool in which a
prediction signal of a block is generated at least based on an
intra prediction signal and an inter prediction signal.
Description
[0001] This application is a continuation of International Application No. PCT/CN2019/119634, filed on Nov. 20, 2019, which claims priority to and benefits of International Patent Application Nos. PCT/CN2018/116371, filed on Nov. 20, 2018, PCT/CN2019/070062, filed on Jan. 2, 2019, and PCT/CN2019/072060, filed on Jan. 16, 2019. All the aforementioned patent applications are hereby incorporated by reference in their entireties.
TECHNICAL FIELD
[0002] This patent document relates to video coding techniques,
devices and systems.
BACKGROUND
[0003] Motion compensation (MC) is a technique in video processing
to predict a frame in a video, given the previous and/or future
frames by accounting for motion of the camera and/or objects in the
video. Motion compensation can be used in the encoding/decoding of
video data for video compression.
SUMMARY
[0004] This document discloses methods, systems, and devices
related to the use of motion compensation in video coding and
decoding.
[0005] In one example aspect, a method for video processing is disclosed. The method comprises: calculating, during a conversion
between a current block of video and a bitstream representation of
the current block, differences between two reference blocks
associated with the current block or differences between two
reference sub-blocks associated with a sub-block within the current
block based on representative positions of the reference blocks or
representative positions of the reference sub-blocks; and
performing the conversion based on the differences.
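For illustration only, the following Python sketch (the names and the use of numpy are assumptions, not part of the application) models the partial-position difference of this aspect as a SAD computed over one row of every N rows, with N=2 as in the later claims:

```python
import numpy as np

def partial_sad(ref0, ref1, step=2):
    """Illustrative partial-position SAD: compare only one row of
    every `step` rows of the two reference (sub-)blocks."""
    r0 = ref0[::step].astype(np.int64)  # representative rows only
    r1 = ref1[::step].astype(np.int64)
    return int(np.abs(r0 - r1).sum())

# Example with two random 8x8 reference sub-blocks
rng = np.random.default_rng(0)
print(partial_sad(rng.integers(0, 256, (8, 8)), rng.integers(0, 256, (8, 8))))
```

With step=2, roughly half the difference computations of a full-block SAD are performed.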
[0006] In one example aspect, a method for video processing is disclosed. The method comprises: making a decision, based on a
determination that a current block of a video is coded using a
specific coding mode, regarding a selective enablement of a decoder
side motion vector derivation (DMVD) tool for the current block,
wherein the DMVD tool derives a refinement of motion information
signaled in a bitstream representation of the video; and
performing, based on the decision, a conversion between the current
block and the bitstream representation.
[0007] In one example aspect, a video processing method is
disclosed. The method includes generating, using a multi-step
refinement process, multiple refinement values of motion vector
information based on decoded motion information from a bitstream
representation of a current video block, and reconstructing the
current video block or decoding other video blocks based on
multiple refinement values.
[0008] In another example aspect, another video processing method is disclosed. The method includes performing, for a conversion between a current block and a bitstream representation of the current block, a multi-step refinement process for a sub-block of the current block and a temporal gradient modification process between two prediction blocks of the sub-block, wherein the multi-step refinement process generates multiple refinement values of motion vector information based on decoded motion information from the bitstream representation of the current video block, and performing the conversion between the current block and the bitstream representation based on the refinement values.
[0009] In yet another example aspect, another video processing method is disclosed. The method includes determining, using a multi-step decoder-side motion vector refinement process for a current video block, a final motion vector, and performing a conversion between the current block and the bitstream representation using the final motion vector.
[0010] In yet another aspect, another method of video processing is disclosed. The method includes applying, during a conversion between a current video block and a bitstream representation of the current video block, multiple different motion vector refinement processes to different sub-blocks of the current video block, and performing the conversion between the current block and the bitstream representation using a final motion vector for the current video block generated from the multiple different motion vector refinement processes.
[0011] In yet another aspect, another method of video processing is disclosed. The method includes performing a conversion between a current video block and a bitstream representation of the current video block using a rule that limits the maximum number of sub-blocks into which a coding unit or a prediction unit is divided in case the current video block is coded using a sub-block based coding tool,
[0012] wherein the sub-block based coding tool includes one or more of affine coding, advanced temporal motion vector predictor, bi-directional optical flow or a decoder-side motion vector refinement coding tool.
[0013] In yet another example aspect, another method of video
processing is disclosed. The method includes performing a
conversion between a current video block and a bitstream
representation of the current video block using a rule that
specifies to use different partitioning for chroma components of
the current video block than a luma component of the current video
block in case that the current video block is coded using a
sub-block based coding tool, wherein the sub-block based coding
tool includes one or more of affine coding, advanced temporal
motion vector predictor, bi-directional optical flow or a
decoder-side motion vector refinement coding tool.
[0014] In yet another example aspect, another method of video
processing is disclosed. The method includes determining, in an
early termination stage of a bi-directional optical flow (BIO)
technique or a decoder-side motion vector refinement (DMVR)
technique, differences between reference video blocks associated
with a current video block, and performing further processing of
the current video block based on the differences.
[0015] In yet another representative aspect, the various techniques
described herein may be embodied as a computer program product
stored on a non-transitory computer readable media. The computer
program product includes program code for carrying out the methods
described herein.
[0016] In yet another representative aspect, a video decoder
apparatus may implement a method as described herein.
[0017] The details of one or more implementations are set forth in
the accompanying attachments, the drawings, and the description
below. Other features will be apparent from the description and
drawings, and from the claims.
BRIEF DESCRIPTION OF THE DRAWINGS
[0018] FIG. 1 shows an example of a derivation process for merge
candidates list construction.
[0019] FIG. 2 shows example positions of spatial merge
candidates.
[0020] FIG. 3 shows examples of candidate pairs considered for redundancy check of spatial merge candidates.
[0021] FIG. 4 shows example positions for the second PU of N.times.2N and 2N.times.N partitions.
[0022] FIG. 5 is an illustration of motion vector scaling for temporal merge candidate.
[0023] FIG. 6 shows examples of candidate positions for temporal merge candidate, C0 and C1.
[0024] FIG. 7 shows an example of combined bi-predictive merge candidate.
[0025] FIG. 8 shows an example of a derivation process for motion
vector prediction candidates.
[0026] FIG. 9 is an example illustration of motion vector scaling
for spatial motion vector candidate.
[0027] FIG. 10 illustrates an example of advanced temporal motion
vector predictor (ATMVP) for a Coding Unit (CU).
[0028] FIG. 11 shows an example of one CU with four sub-blocks (A-D) and its neighbouring blocks (a-d).
[0029] FIG. 12 is an example illustration of sub-blocks where OBMC applies.
[0030] FIG. 13 shows an example of neighbouring samples used for deriving IC parameters.
[0031] FIG. 14 shows an example of a simplified affine motion
model.
[0032] FIG. 15 shows an example of affine MVF per sub-block.
[0033] FIG. 16 shows an example of a Motion Vector Predictor (MV)
for AF_INTER mode.
[0034] FIG. 17A-17B shows examples of candidates for AF_MERGE
mode.
[0035] FIG. 18 shows an example process of bilateral matching.
[0036] FIG. 19 shows an example process of template matching.
[0037] FIG. 20 illustrates an implementation of unilateral motion
estimation (ME) in frame rate upconversion (FRUC).
[0038] FIG. 21 illustrates an embodiment of an Ultimate Motion
Vector Expression (UMVE) search process.
[0039] FIG. 22 shows examples of UMVE search points.
[0040] FIG. 23 shows an example of distance index and distance
offset mapping.
[0041] FIG. 24 shows an example of an optical flow trajectory.
[0042] FIG. 25A-25B show examples of Bi-directional Optical flow
(BIO) w/o block extension: a) access positions outside of the
block; b) padding used in order to avoid extra memory access and
calculation.
[0043] FIG. 26 illustrates an example of using Decoder-side motion
vector refinement (DMVR) based on bilateral template matching.
[0044] FIG. 27 shows an example of interweaved prediction.
[0045] FIG. 28 shows an example of iterative motion vector
refinement for BIO.
[0046] FIG. 29 is a block diagram of a hardware platform for
implementing the video coding or decoding techniques described in
the present document.
[0047] FIG. 30 shows an example of a hardware platform for
implementing methods and techniques described in the present
document.
[0048] FIG. 31 is a flowchart of an example method of video
processing.
[0049] FIG. 32 is a flowchart of an example method of video
processing.
[0050] FIG. 33 is a flowchart of an example method of video
processing.
DETAILED DESCRIPTION
[0051] The present document provides several techniques that can be
embodied into digital video encoders and decoders. Section headings
are used in the present document for clarity of understanding and
do not limit scope of the techniques and embodiments disclosed in
each section only to that section.
1. Summary
[0052] The present document is related to video coding technologies. Specifically, it is related to motion compensation in video coding. The disclosed techniques may be applied to existing video coding standards such as HEVC, or to the Versatile Video Coding (VVC) standard being finalized. They may also be applicable to future video coding standards or video codecs.
[0053] In the present document, the term "video processing" may
refer to video encoding, video decoding, video compression or video
decompression. For example, video compression algorithms may be
applied during conversion from pixel representation of a video to a
corresponding bitstream representation or vice versa.
2. Introduction
[0054] Video coding standards have evolved primarily through the development of the well-known ITU-T and ISO/IEC standards. The ITU-T produced H.261 and H.263, ISO/IEC produced MPEG-1 and MPEG-4 Visual, and the two organizations jointly produced the H.262/MPEG-2 Video, H.264/MPEG-4 Advanced Video Coding (AVC), and H.265/HEVC standards. Since H.262, video coding standards have been based on a hybrid video coding structure in which temporal prediction plus transform coding are utilized. To explore future video coding technologies beyond HEVC, the Joint Video Exploration Team (JVET) was founded by VCEG and MPEG jointly in 2015. Since then, many new methods have been adopted by JVET and put into the reference software named Joint Exploration Model (JEM). In April 2018, the Joint Video Expert Team (JVET) between VCEG (Q6/16) and ISO/IEC JTC1 SC29/WG11 (MPEG) was created to work on the VVC standard, targeting a 50% bitrate reduction compared to HEVC.
[0055] 2.1 Inter Prediction in HEVC/H.265
[0056] Each inter-predicted PU has motion parameters for one or two
reference picture lists. Motion parameters include a motion vector
and a reference picture index. Usage of one of the two reference
picture lists may also be signalled using inter_pred_idc. Motion
vectors may be explicitly coded as deltas relative to
predictors.
[0057] When a CU is coded with skip mode, one PU is associated with
the CU, and there are no significant residual coefficients, no
coded motion vector delta or reference picture index. A merge mode
is specified whereby the motion parameters for the current PU are
obtained from neighbouring PUs, including spatial and temporal
candidates. The merge mode can be applied to any inter-predicted
PU, not only for skip mode. The alternative to merge mode is the
explicit transmission of motion parameters, where the motion vector (to be more precise, the motion vector difference relative to a motion vector predictor), the corresponding reference picture index for each reference picture list, and the reference picture list usage are signalled explicitly for each PU. Such a mode is named Advanced
motion vector prediction (AMVP) in this disclosure.
[0058] When signalling indicates that one of the two reference
picture lists is to be used, the PU is produced from one block of
samples. This is referred to as `uni-prediction`. Uni-prediction is
available both for P-slices and B-slices.
[0059] When signalling indicates that both of the reference picture
lists are to be used, the PU is produced from two blocks of
samples. This is referred to as `bi-prediction`. Bi-prediction is
available for B-slices only.
[0060] The following text provides the details on the inter
prediction modes specified in HEVC. The description will start with
the merge mode.
[0061] 2.1.1 Merge Mode
[0062] 2.1.1.1 Derivation of Candidates for Merge Mode
[0063] When a PU is predicted using merge mode, an index pointing
to an entry in the merge candidates list is parsed from the
bitstream and used to retrieve the motion information. The
construction of this list is specified in the HEVC standard and can
be summarized according to the following sequence of steps:
[0064] Step 1: Initial candidates derivation [0065] Step 1.1:
Spatial candidates derivation [0066] Step 1.2: Redundancy check for
spatial candidates [0067] Step 1.3: Temporal candidates
derivation
[0068] Step 2: Additional candidates insertion [0069] Step 2.1:
Creation of bi-predictive candidates [0070] Step 2.2: Insertion of
zero motion candidates
[0071] These steps are also schematically depicted in FIG. 1. For
spatial merge candidate derivation, a maximum of four merge
candidates are selected among candidates that are located in five
different positions. For temporal merge candidate derivation, a
maximum of one merge candidate is selected among two candidates.
Since a constant number of candidates for each PU is assumed at the decoder, additional candidates are generated when the number of candidates obtained from step 1 does not reach the maximum number of merge candidates (MaxNumMergeCand), which is signalled in the slice header. Since the number of candidates is constant, the index of the best merge candidate is encoded using truncated unary binarization (TU). If the size of the CU is equal to 8, all the PUs of the current CU share a single merge candidate list, which is identical to the merge candidate list of the 2N.times.2N prediction unit.
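As a side note, the truncated unary binarization of the merge index can be sketched as follows (a minimal illustration, assuming MaxNumMergeCand = 5; the function name is not from the specification):

```python
def truncated_unary(index, max_index):
    """`index` leading 1-bits, then a terminating 0 unless the codeword
    is the longest one (index == max_index needs no terminator)."""
    bits = [1] * index
    if index < max_index:
        bits.append(0)
    return bits

for i in range(5):  # merge indices 0..4 when MaxNumMergeCand = 5
    print(i, truncated_unary(i, 4))
```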
[0072] In the following, the operations associated with the
aforementioned steps are detailed.
[0073] 2.1.1.2 Spatial Candidates Derivation
[0074] In the derivation of spatial merge candidates, a maximum of
four merge candidates are selected among candidates located in the
positions depicted in FIG. 2. The order of derivation is A.sub.1,
B.sub.1, B.sub.0, A.sub.0 and B.sub.2. Position B.sub.2 is
considered only when any PU of position A.sub.1, B.sub.1, B.sub.0,
A.sub.0 is not available (e.g. because it belongs to another slice
or tile) or is intra coded. After the candidate at position A.sub.1 is added, the addition of the remaining candidates is subject to a redundancy check which ensures that candidates with the same motion information are excluded from the list so that coding efficiency is improved. To reduce computational complexity, not all possible candidate pairs are considered in the mentioned redundancy check. Instead, only the pairs linked with an arrow in FIG. 3 are considered, and a candidate is only added to the list if the corresponding candidate used for the redundancy check does not have the same motion information. Another source of duplicate motion information is the "second PU" associated with partitions different from 2N.times.2N. As an example, FIG. 4 depicts the second PU for the cases of N.times.2N and 2N.times.N, respectively. When the current PU is partitioned as N.times.2N, the candidate at position A.sub.1 is not considered for list construction, since adding this candidate would lead to two prediction units having the same motion information, which is redundant when there is just one PU in the coding unit. Similarly, position B.sub.1 is not considered when the current PU is partitioned as 2N.times.N.
[0075] 2.1.1.3 Temporal Candidates Derivation
[0076] In this step, only one candidate is added to the list.
Particularly, in the derivation of this temporal merge candidate, a
scaled motion vector is derived based on the co-located PU belonging to the picture which has the smallest POC difference with the current picture within the given reference picture list. The reference
picture list to be used for derivation of the co-located PU is
explicitly signalled in the slice header. The scaled motion vector
for temporal merge candidate is obtained as illustrated by the
dotted line in FIG. 5 which is scaled from the motion vector of the
co-located PU using the POC distances, tb and td, where tb is
defined to be the POC difference between the reference picture of
the current picture and the current picture and td is defined to be
the POC difference between the reference picture of the co-located
picture and the co-located picture. The reference picture index of
temporal merge candidate is set equal to zero. A practical
realization of the scaling process is described in the HEVC
specification. For a B-slice, two motion vectors, one is for
reference picture list 0 and the other is for reference picture
list 1, are obtained and combined to make the bi-predictive merge
candidate.
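The tb/td scaling described above can be sketched as follows (a simplified floating-point model; the HEVC specification uses fixed-point arithmetic with clipping, and the function name is illustrative):

```python
def scale_mv(mv, tb, td):
    """Scale a motion vector by the POC-distance ratio tb/td."""
    return (round(mv[0] * tb / td), round(mv[1] * tb / td))

# Co-located MV (8, -4), tb = 1, td = 2 -> scaled temporal candidate (4, -2)
print(scale_mv((8, -4), tb=1, td=2))
```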
[0077] In the co-located PU (Y) belonging to the reference frame,
the position for the temporal candidate is selected between
candidates C.sub.0 and C.sub.1, as depicted in FIG. 6. If PU at
position C.sub.0 is not available, is intra coded, or is outside of
the current CTU row, position C.sub.1 is used. Otherwise, position
C.sub.0 is used in the derivation of the temporal merge
candidate.
[0078] 2.1.1.4 Additional Candidates Insertion
[0079] Besides spatial and temporal merge candidates, there are two
additional types of merge candidates: combined bi-predictive merge
candidate and zero merge candidate. Combined bi-predictive merge
candidates are generated by utilizing spatial and temporal merge
candidates. Combined bi-predictive merge candidate is used for
B-Slice only. The combined bi-predictive candidates are generated
by combining the first reference picture list motion parameters of
an initial candidate with the second reference picture list motion
parameters of another. If these two tuples provide different motion
hypotheses, they will form a new bi-predictive candidate. As an
example, FIG. 7 depicts the case when two candidates in the
original list (on the left), which have mvL0 and refIdxL0 or mvL1
and refIdxL1, are used to create a combined bi-predictive merge
candidate added to the final list (on the right). There are
numerous rules regarding the combinations which are considered to
generate these additional merge candidates.
[0080] Zero motion candidates are inserted to fill the remaining entries in the merge candidates list and thereby reach the MaxNumMergeCand capacity. These candidates have zero spatial
displacement and a reference picture index which starts from zero
and increases every time a new zero motion candidate is added to
the list. The number of reference frames used by these candidates
is one and two for uni and bi-directional prediction, respectively.
Finally, no redundancy check is performed on these candidates.
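A minimal sketch of this zero-candidate filling (illustrative names; the candidate structure is simplified to a single list):

```python
def fill_zero_candidates(cands, max_num, num_ref):
    """Append zero-MV candidates whose reference index starts at zero
    and increases with each added candidate; no redundancy check."""
    ref_idx = 0
    while len(cands) < max_num:
        cands.append({'mv': (0, 0), 'ref_idx': ref_idx})
        ref_idx = min(ref_idx + 1, num_ref - 1)  # clamp to last reference
    return cands

print(fill_zero_candidates([{'mv': (3, 1), 'ref_idx': 0}], max_num=5, num_ref=2))
```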
[0081] 2.1.1.5 Motion Estimation Regions for Parallel
Processing
[0082] To speed up the encoding process, motion estimation can be
performed in parallel whereby the motion vectors for all prediction
units inside a given region are derived simultaneously. The
derivation of merge candidates from spatial neighbourhood may
interfere with parallel processing as one prediction unit cannot
derive the motion parameters from an adjacent PU until its
associated motion estimation is completed. To mitigate the
trade-off between coding efficiency and processing latency, HEVC
defines the motion estimation region (MER) whose size is signalled
in the picture parameter set using the "log
2_parallel_merge_level_minus2" syntax element. When a MER is
defined, merge candidates falling in the same region are marked as
unavailable and therefore not considered in the list
construction.
[0083] 2.1.2 AMVP
[0084] AMVP exploits the spatio-temporal correlation of a motion vector with neighbouring PUs, which is used for explicit transmission of motion parameters. For each reference picture list, a motion vector candidate list is constructed by first checking the availability of left, above, and temporally neighbouring PU positions, removing redundant candidates, and adding zero vectors to make the candidate list a constant length. Then, the encoder can select the best predictor from the candidate list and transmit the corresponding index indicating the chosen candidate. As with merge index signalling, the index of the best motion vector candidate is encoded using truncated unary. The maximum value to be encoded in this case is 2 (see FIG. 8). In the following sections, details about the derivation process of motion vector prediction candidates are provided.
[0085] FIG. 8 summarizes derivation process for motion vector
prediction candidate.
[0086] 2.1.2.1 Derivation of AMVP Candidates
[0087] In motion vector prediction, two types of motion vector
candidates are considered: spatial motion vector candidate and
temporal motion vector candidate. For spatial motion vector
candidate derivation, two motion vector candidates are eventually
derived based on motion vectors of each PU located in five
different positions as depicted in FIG. 2.
[0088] For temporal motion vector candidate derivation, one motion vector candidate is selected from two candidates, which are derived based on two different co-located positions. After the first list of spatio-temporal candidates is made, duplicated motion vector candidates in the list are removed. If the number of potential candidates is larger than two, motion vector candidates whose reference picture index within the associated reference picture list is larger than 1 are removed from the list. If the number of spatio-temporal motion vector candidates is smaller than two, additional zero motion vector candidates are added to the list.
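These pruning and padding rules can be summarized in a short sketch (illustrative only; candidates are reduced to dicts with an 'mv' and a 'ref_idx'):

```python
def build_amvp_list(spatial, temporal, max_cands=2):
    cands = []
    for c in spatial + temporal:   # keep derivation order
        if c not in cands:         # remove duplicated MV candidates
            cands.append(c)
    if len(cands) > max_cands:     # drop candidates with ref idx > 1
        cands = [c for c in cands if c['ref_idx'] <= 1]
    while len(cands) < max_cands:  # pad with zero motion vectors
        cands.append({'mv': (0, 0), 'ref_idx': 0})
    return cands[:max_cands]

print(build_amvp_list([{'mv': (2, 0), 'ref_idx': 0}], [{'mv': (2, 0), 'ref_idx': 0}]))
```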
[0089] 2.1.2.2 Spatial Motion Vector Candidates
[0090] In the derivation of spatial motion vector candidates, a
maximum of two candidates are considered among five potential
candidates, which are derived from PUs located in positions as
depicted in FIG. 2, those positions being the same as those of
motion merge. The order of derivation for the left side of the
current PU is defined as A.sub.0, A.sub.1, and scaled A.sub.0,
scaled A.sub.1. The order of derivation for the above side of the
current PU is defined as B.sub.0, B.sub.1, B.sub.2, scaled B.sub.0,
scaled B.sub.1, scaled B.sub.2. For each side there are therefore
four cases that can be used as motion vector candidate, with two
cases not required to use spatial scaling, and two cases where
spatial scaling is used. The four different cases are summarized as
follows. [0091] No spatial scaling [0092] (1) Same reference
picture list, and same reference picture index (same POC) [0093]
(2) Different reference picture list, but same reference picture
(same POC) [0094] Spatial scaling [0095] (3) Same reference picture
list, but different reference picture (different POC) [0096] (4)
Different reference picture list, and different reference picture
(different POC)
[0097] The no-spatial-scaling cases are checked first followed by
the spatial scaling. Spatial scaling is considered when the POC is
different between the reference picture of the neighbouring PU and
that of the current PU regardless of reference picture list. If all
PUs of left candidates are not available or are intra coded,
scaling for the above motion vector is allowed to help parallel
derivation of left and above MV candidates. Otherwise, spatial
scaling is not allowed for the above motion vector.
[0098] In a spatial scaling process, the motion vector of the
neighbouring PU is scaled in a similar manner as for temporal
scaling, as depicted as FIG. 9. The main difference is that the
reference picture list and index of current PU is given as input;
the actual scaling process is the same as that of temporal
scaling.
[0099] 2.1.2.3 Temporal Motion Vector Candidates
[0100] Apart from the reference picture index derivation, all
processes for the derivation of temporal merge candidates are the
same as for the derivation of spatial motion vector candidates (see
FIG. 6). The reference picture index is signalled to the
decoder.
[0101] 2.2 New Inter Prediction Methods in JEM
[0102] 2.2.1 Sub-CU Based Motion Vector Prediction
[0103] In the JEM with QTBT, each CU can have at most one set of motion parameters for each prediction direction. Two sub-CU level motion vector prediction methods are considered in the encoder by splitting a large CU into sub-CUs and deriving motion information for all the sub-CUs of the large CU. The alternative temporal motion vector prediction (ATMVP) method allows each CU to fetch multiple sets of motion information from multiple blocks smaller than the current CU in the collocated reference picture. In the spatial-temporal motion vector prediction (STMVP) method, motion vectors of the sub-CUs are derived recursively by using the temporal motion vector predictor and spatial neighbouring motion vectors.
[0104] To preserve more accurate motion field for sub-CU motion
prediction, the motion compression for the reference frames is
currently disabled.
[0105] 2.2.1.1 Alternative Temporal Motion Vector Prediction
[0106] In the alternative temporal motion vector prediction (ATMVP) method, the temporal motion vector prediction (TMVP) is modified by fetching multiple sets of motion information (including motion vectors and reference indices) from blocks smaller than the current CU. As shown in FIG. 10, the sub-CUs are square N.times.N blocks (N is set to 4 by default).
[0107] ATMVP predicts the motion vectors of the sub-CUs within a CU
in two steps. The first step is to identify the corresponding block
in a reference picture with a so-called temporal vector. The
reference picture is called the motion source picture. The second
step is to split the current CU into sub-CUs and obtain the motion
vectors as well as the reference indices of each sub-CU from the
block corresponding to each sub-CU, as shown in FIG. 10.
[0108] In the first step, a reference picture and the corresponding
block is determined by the motion information of the spatial
neighbouring blocks of the current CU. To avoid the repetitive
scanning process of neighbouring blocks, the first merge candidate
in the merge candidate list of the current CU is used. The first
available motion vector as well as its associated reference index
are set to be the temporal vector and the index to the motion
source picture. This way, in ATMVP, the corresponding block may be
more accurately identified, compared with TMVP, wherein the
corresponding block (sometimes called collocated block) is always
in a bottom-right or center position relative to the current
CU.
[0109] In the second step, a corresponding block of the sub-CU is
identified by the temporal vector in the motion source picture, by
adding to the coordinate of the current CU the temporal vector. For
each sub-CU, the motion information of its corresponding block (the
smallest motion grid that covers the center sample) is used to
derive the motion information for the sub-CU. After the motion
information of a corresponding N.times.N block is identified, it is
converted to the motion vectors and reference indices of the
current sub-CU, in the same way as TMVP of HEVC, wherein motion
scaling and other procedures apply. For example, the decoder checks
whether the low-delay condition (i.e. the POCs of all reference
pictures of the current picture are smaller than the POC of the
current picture) is fulfilled and possibly uses motion vector MV.sub.X (the motion vector corresponding to reference picture list X) to predict motion vector MV.sub.Y (with X being equal to 0 or 1 and Y being equal to 1-X) for each sub-CU.
[0110] 2.2.1.2 Spatial-Temporal Motion Vector Prediction
[0111] In this method, the motion vectors of the sub-CUs are
derived recursively, following raster scan order. FIG. 11
illustrates this concept. Let us consider an 8.times.8 CU which
contains four 4.times.4 sub-CUs A, B, C, and D. The neighbouring
4.times.4 blocks in the current frame are labelled as a, b, c, and
d.
[0112] The motion derivation for sub-CU A starts by identifying its
two spatial neighbours. The first neighbour is the N.times.N block
above sub-CU A (block c). If this block c is not available or is
intra coded the other N.times.N blocks above sub-CU A are checked
(from left to right, starting at block c). The second neighbour is
a block to the left of the sub-CU A (block b). If block b is not
available or is intra coded other blocks to the left of sub-CU A
are checked (from top to bottom, starting at block b). The motion
information obtained from the neighbouring blocks for each list is
scaled to the first reference frame for a given list. Next,
temporal motion vector predictor (TMVP) of sub-block A is derived
by following the same procedure of TMVP derivation as specified in
HEVC. The motion information of the collocated block at location D
is fetched and scaled accordingly. Finally, after retrieving and
scaling the motion information, all available motion vectors (up to
3) are averaged separately for each reference list. The averaged
motion vector is assigned as the motion vector of the current
sub-CU.
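The final averaging step can be sketched as follows (illustrative; MVs as (x, y) tuples, with None marking an unavailable predictor):

```python
def stmvp_average(mvs):
    """Average all available MVs (up to 3: above neighbour, left
    neighbour, temporal predictor) for one reference list."""
    avail = [mv for mv in mvs if mv is not None]
    if not avail:
        return None
    n = len(avail)
    return (round(sum(v[0] for v in avail) / n),
            round(sum(v[1] for v in avail) / n))

print(stmvp_average([(4, 0), None, (2, 2)]))  # -> (3, 1)
```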
[0113] FIG. 11 shows an example of one CU with four sub-blocks
(A-D) and its neighbouring blocks (a-d).
[0114] 2.2.1.3 Sub-CU Motion Prediction Mode Signalling
[0115] The sub-CU modes are enabled as additional merge candidates
and there is no additional syntax element required to signal the
modes. Two additional merge candidates are added to merge
candidates list of each CU to represent the ATMVP mode and STMVP
mode. Up to seven merge candidates are used if the sequence parameter set indicates that ATMVP and STMVP are enabled. The encoding logic of the additional merge candidates is the same as for the merge candidates in the HM, which means that, for each CU in a P or B slice, two more RD checks are needed for the two additional merge candidates.
[0116] In the JEM, all bins of the merge index are context coded by CABAC, while in HEVC only the first bin is context coded and the remaining bins are context bypass coded.
[0117] 2.2.2 Adaptive Motion Vector Difference Resolution
[0118] In HEVC, motion vector differences (MVDs) (between the
motion vector and predicted motion vector of a PU) are signalled in
units of quarter luma samples when use_integer_mv_flag is equal to
0 in the slice header. In the JEM, a locally adaptive motion vector
resolution (LAMVR) is introduced. In the JEM, MVD can be coded in
units of quarter luma samples, integer luma samples or four luma
samples. The MVD resolution is controlled at the coding unit (CU)
level, and MVD resolution flags are conditionally signalled for each CU that has at least one non-zero MVD component.
[0119] For a CU that has at least one non-zero MVD component, a
first flag is signalled to indicate whether quarter luma sample MV
precision is used in the CU. When the first flag (equal to 1)
indicates that quarter luma sample MV precision is not used,
another flag is signalled to indicate whether integer luma sample
MV precision or four luma sample MV precision is used.
[0120] When the first MVD resolution flag of a CU is zero, or not
coded for a CU (meaning all MVDs in the CU are zero), the quarter
luma sample MV resolution is used for the CU. When a CU uses
integer-luma sample MV precision or four-luma-sample MV precision,
the MVPs in the AMVP candidate list for the CU are rounded to the
corresponding precision.
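The rounding of a quarter-luma-sample MV component to a coarser resolution can be sketched as follows (a simplified model assuming non-negative components; the codec's actual rounding of negative values may differ):

```python
def round_mv(mv_qpel, shift):
    """Round an MV component in quarter-sample units to a coarser grid:
    shift=2 -> integer-sample precision, shift=4 -> four-sample precision."""
    offset = 1 << (shift - 1)
    return ((mv_qpel + offset) >> shift) << shift

print(round_mv(7, 2))  # 7 quarter-pel rounds to 8 (2 integer samples)
print(round_mv(7, 4))  # 7 quarter-pel rounds to 0 (nearest multiple of 16)
```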
[0121] In the encoder, CU-level RD checks are used to determine
which MVD resolution is to be used for a CU. That is, the CU-level
RD check is performed three times for each MVD resolution. To
accelerate encoder speed, the following encoding schemes are
applied in the JEM. [0122] During RD check of a CU with normal
quarter luma sample MVD resolution, the motion information of the
current CU (integer luma sample accuracy) is stored. The stored
motion information (after rounding) is used as the starting point
for further small range motion vector refinement during the RD
check for the same CU with integer luma sample and 4 luma sample
MVD resolution so that the time-consuming motion estimation process
is not duplicated three times. [0123] The RD check of a CU with 4 luma sample MVD resolution is conditionally invoked. For a CU, when the RD cost of integer luma sample MVD resolution is much larger than that of quarter luma sample MVD resolution, the RD check of 4 luma sample MVD resolution for the CU is skipped.
[0124] 2.2.3 Higher Motion Vector Storage Accuracy
[0125] In HEVC, motion vector accuracy is one-quarter pel
(one-quarter luma sample and one-eighth chroma sample for 4:2:0
video). In the JEM, the accuracy for the internal motion vector
storage and the merge candidate increases to 1/16 pel. The higher
motion vector accuracy ( 1/16 pel) is used in motion compensation
inter prediction for the CU coded with skip/merge mode. For the CU
coded with normal AMVP mode, either the integer-pel or quarter-pel
motion is used.
[0126] SHVC upsampling interpolation filters, which have the same filter length and normalization factor as HEVC motion compensation interpolation filters, are used as motion compensation interpolation filters for the additional fractional pel positions. The chroma component motion vector accuracy is 1/32 sample in the JEM; the additional interpolation filters for the 1/32 pel fractional positions are derived by using the average of the filters of the two neighbouring 1/16 pel fractional positions.
[0127] 2.2.4 Overlapped Block Motion Compensation
[0128] Overlapped Block Motion Compensation (OBMC) has previously
been used in H.263. In the JEM, unlike in H.263, OBMC can be
switched on and off using syntax at the CU level. When OBMC is used
in the JEM, the OBMC is performed for all motion compensation (MC)
block boundaries except the right and bottom boundaries of a CU.
Moreover, it is applied to both the luma and chroma components. In the JEM, an MC block corresponds to a coding block. When a CU is coded with a sub-CU mode (including sub-CU merge, affine and FRUC modes), each sub-block of the CU is an MC block. To process CU boundaries in a uniform fashion, OBMC is performed at the sub-block level for all MC block boundaries, where the sub-block size is set equal to 4.times.4, as illustrated in FIG. 12.
[0129] When OBMC applies to the current sub-block, besides the current motion vector, the motion vectors of the four connected neighbouring sub-blocks, if available and not identical to the current motion vector, are also used to derive a prediction block for the current sub-block. These multiple prediction blocks based on multiple motion vectors are combined to generate the final prediction signal of the current sub-block.
[0130] The prediction block based on motion vectors of a neighbouring sub-block is denoted as P.sub.N, with N indicating an index for the neighbouring above, below, left and right sub-blocks, and the prediction block based on motion vectors of the current sub-block is denoted as P.sub.C. When P.sub.N is based on the motion information of a neighbouring sub-block that contains the same motion information as the current sub-block, the OBMC is not performed from P.sub.N. Otherwise, every sample of P.sub.N is added to the same sample in P.sub.C, i.e., four rows/columns of P.sub.N are added to P.sub.C. The weighting factors {1/4, 1/8, 1/16, 1/32} are used for P.sub.N and the weighting factors {3/4, 7/8, 15/16, 31/32} are used for P.sub.C. The exceptions are small MC blocks (i.e., when the height or width of the coding block is equal to 4, or when a CU is coded with a sub-CU mode), for which only two rows/columns of P.sub.N are added to P.sub.C. In this case weighting factors {1/4, 1/8} are used for P.sub.N and weighting factors {3/4, 7/8} are used for P.sub.C. For P.sub.N generated based on motion vectors of a vertically (horizontally) neighbouring sub-block, samples in the same row (column) of P.sub.N are added to P.sub.C with a same weighting factor.
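The row weighting for an above neighbour can be sketched as follows (illustrative; other directions apply the same weights along their own boundary rows or columns):

```python
import numpy as np

W_N = [1/4, 1/8, 1/16, 1/32]    # weights for P_N, boundary rows 0..3
W_C = [3/4, 7/8, 15/16, 31/32]  # complementary weights for P_C

def obmc_blend_top(p_c, p_n):
    """Blend the four boundary rows of P_C with the neighbour-MV
    prediction P_N using the per-row weights above."""
    out = p_c.astype(np.float64).copy()
    for r in range(4):
        out[r] = W_C[r] * p_c[r] + W_N[r] * p_n[r]
    return out

print(obmc_blend_top(np.full((4, 4), 100.0), np.full((4, 4), 60.0))[:, 0])
# -> [90.   95.   97.5  98.75]
```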
[0131] In the JEM, for a CU with size less than or equal to 256
luma samples, a CU level flag is signalled to indicate whether OBMC
is applied or not for the current CU. For the CUs with size larger
than 256 luma samples or not coded with AMVP mode, OBMC is applied
by default. At the encoder, when OBMC is applied for a CU, its
impact is taken into account during the motion estimation stage.
The prediction signal formed by OBMC using motion information of
the top neighbouring block and the left neighbouring block is used
to compensate the top and left boundaries of the original signal of
the current CU, and then the normal motion estimation process is
applied.
[0132] 2.2.5 Local Illumination Compensation
[0133] Local Illumination Compensation (LIC) is based on a linear model for illumination changes, using a scaling factor a and an offset b. It is enabled or disabled adaptively for each inter-mode coded coding unit (CU).
[0134] When LIC applies for a CU, a least-squares error method is employed to derive the parameters a and b by using the neighbouring
samples of the current CU and their corresponding reference
samples. More specifically, as illustrated in FIG. 13, the
subsampled (2:1 subsampling) neighbouring samples of the CU and the
corresponding samples (identified by motion information of the
current CU or sub-CU) in the reference picture are used. The IC
parameters are derived and applied for each prediction direction
separately.
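A minimal least-squares sketch of the a/b derivation (illustrative; np.polyfit stands in for the codec's closed-form integer derivation):

```python
import numpy as np

def derive_lic_params(neigh_cur, neigh_ref):
    """Fit cur ~= a * ref + b over the subsampled neighbouring samples
    and their motion-compensated reference counterparts."""
    a, b = np.polyfit(np.asarray(neigh_ref, dtype=float),
                      np.asarray(neigh_cur, dtype=float), 1)
    return a, b

a, b = derive_lic_params([52, 60, 70, 80], [50, 58, 68, 78])
print(round(a, 3), round(b, 3))  # -> 1.0 2.0 for this exact-offset example
```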
[0135] When a CU is coded with merge mode, the LIC flag is copied
from neighbouring blocks, in a way similar to motion information
copy in merge mode; otherwise, an LIC flag is signalled for the CU
to indicate whether LIC applies or not.
[0136] When LIC is enabled for a picture, additional CU level RD
check is needed to determine whether LIC is applied or not for a
CU. When LIC is enabled for a CU, mean-removed sum of absolute
difference (MR-SAD) and mean-removed sum of absolute
Hadamard-transformed difference (MR-SATD) are used, instead of SAD
and SATD, for integer pel motion search and fractional pel motion
search, respectively.
[0137] To reduce the encoding complexity, the following encoding
scheme is applied in the JEM. [0138] LIC is disabled for the entire
picture when there is no obvious illumination change between a
current picture and its reference pictures. To identify this
situation, histograms of a current picture and every reference
picture of the current picture are calculated at the encoder. If
the histogram difference between the current picture and every
reference picture of the current picture is smaller than a given
threshold, LIC is disabled for the current picture; otherwise, LIC
is enabled for the current picture.
[0139] 2.2.6 Affine Motion Compensation Prediction
[0140] In HEVC, only a translational motion model is applied for motion compensation prediction (MCP), while in the real world there are many kinds of motion, e.g. zoom in/out, rotation, perspective motions and other irregular motions. In the JEM, a simplified affine transform motion compensation prediction is applied. As shown in FIG. 14, the affine motion field of the block is described by two control point motion vectors.
[0141] The motion vector field (MVF) of a block is described by the
following equation:
$$\begin{cases} v_x = \dfrac{v_{1x}-v_{0x}}{w}\,x - \dfrac{v_{1y}-v_{0y}}{w}\,y + v_{0x} \\[4pt] v_y = \dfrac{v_{1y}-v_{0y}}{w}\,x + \dfrac{v_{1x}-v_{0x}}{w}\,y + v_{0y} \end{cases} \qquad (1)$$
[0142] where (v.sub.0x, v.sub.0y) is the motion vector of the top-left corner control point, and (v.sub.1x, v.sub.1y) is the motion vector of the top-right corner control point.
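Equation (1) can be evaluated per position as in this sketch (illustrative; control-point MVs given in luma samples):

```python
def affine_mv(x, y, v0, v1, w):
    """Per-position MV from the 4-parameter affine model of Equation (1),
    with v0 the top-left and v1 the top-right control-point MV."""
    vx = (v1[0] - v0[0]) / w * x - (v1[1] - v0[1]) / w * y + v0[0]
    vy = (v1[1] - v0[1]) / w * x + (v1[0] - v0[0]) / w * y + v0[1]
    return vx, vy

# 16-wide block, CPMVs (0, 0) and (4, 2): MV at position (8, 8)
print(affine_mv(8, 8, (0, 0), (4, 2), w=16))  # -> (1.0, 3.0)
```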
[0143] In order to further simplify the motion compensation
prediction, sub-block based affine transform prediction is applied.
The sub-block size M.times.N is derived as in Equation 2, where
MvPre is the motion vector fraction accuracy ( 1/16 in JEM),
(v.sub.2x, v.sub.2y) is motion vector of the bottom-left control
point, calculated according to Equation 1.
$$\begin{cases} M = \mathrm{clip3}\left(4,\ w,\ \dfrac{w \times \mathrm{MvPre}}{\max\big(\mathrm{abs}(v_{1x}-v_{0x}),\ \mathrm{abs}(v_{1y}-v_{0y})\big)}\right) \\[8pt] N = \mathrm{clip3}\left(4,\ h,\ \dfrac{h \times \mathrm{MvPre}}{\max\big(\mathrm{abs}(v_{2x}-v_{0x}),\ \mathrm{abs}(v_{2y}-v_{0y})\big)}\right) \end{cases} \qquad (2)$$
[0144] After being derived by Equation 2, M and N should be adjusted downward if necessary to make them divisors of w and h, respectively.
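Equation (2) together with the downward adjustment can be sketched as follows (illustrative; CPMVs in luma samples, and the degenerate case of identical control-point MVs is not handled):

```python
def clip3(lo, hi, v):
    return max(lo, min(hi, v))

def affine_subblock_size(w, h, v0, v1, v2, mv_pre=1/16):
    """Sub-block size (M, N) per Equation (2), then adjusted downward
    so that M divides w and N divides h."""
    m = clip3(4, w, w * mv_pre / max(abs(v1[0] - v0[0]), abs(v1[1] - v0[1])))
    n = clip3(4, h, h * mv_pre / max(abs(v2[0] - v0[0]), abs(v2[1] - v0[1])))
    m, n = int(m), int(n)
    while w % m:  # shrink to a divisor of w
        m -= 1
    while h % n:  # shrink to a divisor of h
        n -= 1
    return m, n

print(affine_subblock_size(16, 16, (0, 0), (0.5, 0.25), (0.25, 0.5)))  # -> (4, 4)
```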
[0145] To derive motion vector of each M.times.N sub-block, the
motion vector of the center sample of each sub-block, as shown in
FIG. 15, is calculated according to Equation 1, and rounded to 1/16
fraction accuracy. Then the motion compensation interpolation
filters mentioned in previous section [00111] are applied to
generate the prediction of each sub-block with derived motion
vector.
[0146] After MCP, the high accuracy motion vector of each sub-block
is rounded and saved as the same accuracy as the normal motion
vector.
[0147] In the JEM, there are two affine motion modes: AF_INTER mode
and AF_MERGE mode. For CUs with both width and height larger than
8, AF_INTER mode can be applied. An affine flag in CU level is
signalled in the bitstream to indicate whether AF_INTER mode is
used. In this mode, a candidate list with motion vector pair
{(v.sub.0, v.sub.1)|v.sub.0={v.sub.A, v.sub.B, v.sub.C},
v.sub.1={v.sub.D, v.sub.E}} is constructed using the neighbour
blocks. As shown in FIG. 16, v.sub.0 is selected from the motion
vectors of the block A, B or C. The motion vector from the
neighbour block is scaled according to the reference list and the
relationship among the POC of the reference for the neighbour
block, the POC of the reference for the current CU and the POC of
the current CU. And the approach to select v.sub.1 from the
neighbour blocks D and E is similar. If the number of candidates in the list is smaller than 2, the list is padded with motion vector pairs composed by duplicating each of the AMVP candidates. When the candidate list is larger than 2, the candidates are first sorted according to the consistency of the neighbouring motion vectors (similarity of the two motion vectors in a pair candidate) and only
the first two candidates are kept. An RD cost check is used to
determine which motion vector pair candidate is selected as the
control point motion vector prediction (CPMVP) of the current CU.
And an index indicating the position of the CPMVP in the candidate
list is signalled in the bitstream. After the CPMVP of the current
affine CU is determined, affine motion estimation is applied and
the control point motion vector (CPMV) is found. Then the
difference of the CPMV and the CPMVP is signalled in the
bitstream.
[0148] When a CU is coded in AF_MERGE mode, it gets the first
block coded with affine mode from the valid neighbour reconstructed
blocks. And the selection order for the candidate block is from
left, above, above right, left bottom to above left as shown in
FIG. 17A. If the neighbour left bottom block A is coded in affine
mode as shown in FIG. 17B, the motion vectors v.sub.2, v.sub.3 and
v.sub.4 of the top left corner, above right corner and left bottom
corner of the CU which contains the block A are derived. And the
motion vector v.sub.0 of the top left corner on the current CU is
calculated according to v.sub.2, v.sub.3 and v.sub.4. Secondly, the
motion vector v.sub.1 of the above right of the current CU is
calculated.
[0149] After the CPMV of the current CU v.sub.0 and v.sub.1 are
derived, according to the simplified affine motion model Equation
1, the MVF of the current CU is generated. In order to identify whether the current CU is coded with AF_MERGE mode, an affine flag is signalled in the bitstream when at least one neighbouring block is coded in affine mode.
[0150] 2.2.7 Pattern Matched Motion Vector Derivation
[0151] Pattern matched motion vector derivation (PMMVD) mode is a
special merge mode based on Frame-Rate Up Conversion (FRUC)
techniques. With this mode, motion information of a block is not
signalled but derived at decoder side.
[0152] A FRUC flag is signalled for a CU when its merge flag is
true. When the FRUC flag is false, a merge index is signalled and
the regular merge mode is used. When the FRUC flag is true, an
additional FRUC mode flag is signalled to indicate which method
(bilateral matching or template matching) is to be used to derive
motion information for the block.
[0153] At the encoder side, the decision on whether to use FRUC merge mode for a CU is based on RD cost selection, as done for normal merge candidates. That is, the two matching modes (bilateral matching and template matching) are both checked for a CU by using RD cost selection. The one leading to the minimal cost is further compared to other CU modes. If a FRUC matching mode is the most efficient one, the FRUC flag is set to true for the CU and the related matching mode is used.
[0154] The motion derivation process in FRUC merge mode has two steps: a CU-level motion search is first performed, followed by sub-CU level motion refinement. At the CU level, an initial motion vector is derived for the whole CU based on bilateral matching or template matching. First, a list of MV candidates is generated, and the candidate which leads to the minimum matching cost is selected as the starting point for further CU-level refinement. Then a local search based on bilateral matching or template matching around the starting point is performed, and the MV that results in the minimum matching cost is taken as the MV for the whole CU. Subsequently, the motion information is further refined at the sub-CU level with the derived CU motion vectors as the starting points.
[0155] For example, the following derivation process is performed for a W.times.H CU motion information derivation. At the first stage, the MV for the whole W.times.H CU is derived. At the second stage, the CU is further split into M.times.M sub-CUs. The value of M is calculated as in Equation (3), where D is a predefined splitting depth which is set to 3 by default in the JEM. Then the MV for each sub-CU is derived.
$$M = \max\left\{4,\ \min\left\{\frac{W}{2^{D}},\ \frac{H}{2^{D}}\right\}\right\} \qquad (3)$$
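A worked instance of Equation (3) (illustrative function name, not from the specification):

```python
def fruc_subcu_size(w, h, d=3):
    """Sub-CU size per Equation (3): M = max{4, min{W/2^D, H/2^D}}."""
    return max(4, min(w >> d, h >> d))

print(fruc_subcu_size(64, 32))  # -> 4: a 64x32 CU splits into 4x4 sub-CUs
```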
[0156] As shown in FIG. 18, the bilateral matching is used to
derive motion information of the current CU by finding the closest
match between two blocks along the motion trajectory of the current
CU in two different reference pictures. Under the assumption of
continuous motion trajectory, the motion vectors MV0 and MV1
pointing to the two reference blocks shall be proportional to the
temporal distances, i.e., TD0 and TD1, between the current picture
and the two reference pictures. As a special case, when the current
picture is temporally between the two reference pictures and the
temporal distance from the current picture to the two reference
pictures is the same, the bilateral matching becomes mirror based
bi-directional MV.
[0157] As shown in FIG. 19, template matching is used to derive
motion information of the current CU by finding the closest match
between a template (top and/or left neighbouring blocks of the
current CU) in the current picture and a block (of the same size as
the template) in a reference picture. In addition to the
aforementioned FRUC merge mode, the template matching is also
applied to AMVP mode. In the JEM, as done in HEVC, AMVP has two
candidates. With the template matching method, a new candidate is
derived. If the candidate newly derived by template matching is
different from the first existing AMVP candidate, it is inserted at
the very beginning of the AMVP candidate list and then the list size
is set to two (meaning the second existing AMVP candidate is
removed). When applied to AMVP mode, only the CU level search is
applied.
[0158] 2.2.7.1 CU Level MV Candidate Set
[0159] The MV candidate set at CU level consists of: [0160] (i)
original AMVP candidates if the current CU is in AMVP mode, [0161]
(ii) all merge candidates, [0162] (iii) several MVs in the
interpolated MV field, which is introduced in section [00155], and
[0163] (iv) top and left neighbouring motion vectors.
[0164] When using bilateral matching, each valid MV of a merge
candidate is used as an input to generate an MV pair under the
assumption of bilateral matching. For example, one valid MV of a
merge candidate is (MVa, refa) in reference list A. Then the
reference picture refb of its paired bilateral MV is found in the
other reference list B such that refa and refb are temporally on
different sides of the current picture. If such a refb is not
available in reference list B, refb is determined as a reference
which is different from refa and whose temporal distance to the
current picture is the minimal one in list B. After refb is
determined, MVb is derived by scaling MVa based on the temporal
distances between the current picture and refa, refb.
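For illustration only, a minimal Python sketch of this MV pairing
under the assumption of POC-based temporal distances (the function
name and the direct use of POC values are assumptions of this
sketch, not part of the specification):

    # Illustrative only: derive the paired bilateral MV MVb from (MVa, refa).
    def derive_bilateral_pair(mva, poc_cur, poc_refa, poc_refb):
        # Scale MVa by the ratio of signed temporal distances; the ratio is
        # negative when refa and refb lie on opposite sides of the picture.
        td_a = poc_cur - poc_refa
        td_b = poc_cur - poc_refb
        scale = td_b / td_a
        return (round(mva[0] * scale), round(mva[1] * scale))

    # Example: current POC 8, refa at POC 4 (past), refb at POC 12 (future):
    # MVa = (8, -4) is mirrored to MVb = (-8, 4).
    assert derive_bilateral_pair((8, -4), 8, 4, 12) == (-8, 4)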
[0165] Four MVs from the interpolated MV field are also added to
the CU level candidate list. More specifically, the interpolated
MVs at the position (0, 0), (W/2, 0), (0, H/2) and (W/2, H/2) of
the current CU are added.
[0166] When FRUC is applied in AMVP mode, the original AMVP
candidates are also added to CU level MV candidate set.
[0167] At the CU level, up to 15 MVs for AMVP CUs and up to 13 MVs
for merge CUs are added to the candidate list.
[0168] 2.2.7.2 Sub-CU Level MV Candidate Set
[0169] The MV candidate set at sub-CU level consists of: [0170] (i)
an MV determined from a CU-level search, [0171] (ii) top, left,
top-left and top-right neighbouring MVs, [0172] (iii) scaled
versions of collocated MVs from reference pictures, [0173] (iv) up
to 4 ATMVP candidates, [0174] (v) up to 4 STMVP candidates
[0175] The scaled MVs from reference pictures are derived as
follows. All the reference pictures in both lists are traversed.
The MVs at a collocated position of the sub-CU in a reference
picture are scaled to the reference of the starting CU-level
MV.
[0176] ATMVP and STMVP candidates are limited to the four first
ones.
[0177] At the sub-CU level, up to 17 MVs are added to the candidate
list.
[0178] 2.2.7.3 Generation of Interpolated MV Field
[0179] Before coding a frame, an interpolated motion field is
generated for the whole picture based on unilateral ME. Then the
motion field may be used later as CU-level or sub-CU level MV
candidates.
[0180] First, the motion field of each reference picture in both
reference lists is traversed at 4.times.4 block level. For each
4.times.4 block, if the motion associated with the block passes
through a 4.times.4 block in the current picture (as shown in FIG.
20) and the block has not been assigned any interpolated motion,
the motion of the reference block is scaled to the current picture
according to the temporal distances TD0 and TD1 (in the same way as
MV scaling of TMVP in HEVC) and the scaled motion is assigned to the
block in the current frame. If no scaled MV is assigned to a
4.times.4 block, the block's motion is marked as unavailable in the
interpolated motion field.
[0181] 2.2.7.4 Interpolation and Matching Cost
[0182] When a motion vector points to a fractional sample position,
motion compensated interpolation is needed. To reduce complexity,
bi-linear interpolation instead of regular 8-tap HEVC interpolation
is used for both bilateral matching and template matching.
[0183] The calculation of the matching cost is a bit different at
different steps. When selecting the candidate from the candidate
set at the CU level, the matching cost is the sum of absolute
differences (SAD) of bilateral matching or template matching. After
the starting MV is determined, the matching cost C of bilateral
matching at the sub-CU level search is calculated as follows:
C=SAD+w(|MV.sub.x-MV.sub.x.sup.s|+|MV.sub.y-MV.sub.y.sup.s|)
[0184] where w is a weighting factor which is empirically set to 4,
MV and MV.sup.s indicate the current MV and the starting MV,
respectively. SAD is still used as the matching cost of template
matching at sub-CU level search.
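For illustration only, a minimal Python sketch of this cost,
assuming MVs are held as (x, y) pairs in quarter-pel units (the
function name is hypothetical):

    # Illustrative only: sub-CU level bilateral matching cost
    # C = SAD + w * (|MVx - MVx_s| + |MVy - MVy_s|), with w = 4 as above.
    def sub_cu_matching_cost(sad, mv, mv_start, w=4):
        return sad + w * (abs(mv[0] - mv_start[0]) + abs(mv[1] - mv_start[1]))

    # Example: an MV one quarter-pel away from the start adds w*1 to the SAD.
    assert sub_cu_matching_cost(100, (5, 3), (4, 3)) == 104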
[0185] In FRUC mode, the MV is derived by using luma samples only.
The derived motion will be used for both luma and chroma for MC
inter prediction. After the MV is decided, final MC is performed
using the 8-tap interpolation filter for luma and the 4-tap
interpolation filter for chroma.
[0186] 2.2.7.5 MV Refinement
[0187] MV refinement is a pattern-based MV search with the
criterion of bilateral matching cost or template matching cost. In
the JEM, two search patterns are supported--an unrestricted
center-biased diamond search (UCBDS) and an adaptive cross search
for MV refinement at the CU level and sub-CU level, respectively.
For both CU and sub-CU level MV refinement, the MV is directly
searched at quarter luma sample MV accuracy, and this is followed
by one-eighth luma sample MV refinement. The search range of MV
refinement for the CU and sub-CU steps is set equal to 8 luma
samples.
[0188] 2.2.7.6 Selection of Prediction Direction in Template
Matching FRUC Merge Mode
[0189] In the bilateral matching merge mode, bi-prediction is
always applied since the motion information of a CU is derived
based on the closest match between two blocks along the motion
trajectory of the current CU in two different reference pictures.
There is no such limitation for the template matching merge mode.
In the template matching merge mode, the encoder can choose among
uni-prediction from list0, uni-prediction from list1 or
bi-prediction for a CU. The selection is based on a template
matching cost as follows:
[0190] If costBi<=factor*min(cost0, cost1),
[0191] bi-prediction is used;
[0192] Otherwise, if cost0<=cost1,
[0193] uni-prediction from list0 is used;
[0194] Otherwise,
[0195] uni-prediction from list1 is used;
[0196] where cost0 is the SAD of list0 template matching, cost1 is
the SAD of list1 template matching and costBi is the SAD of
bi-prediction template matching. The value of factor is equal to
1.25, which means that the selection process is biased toward
bi-prediction.
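For illustration only, a minimal Python sketch of this selection
rule (the function name is hypothetical):

    # Illustrative only: prediction direction selection in template matching
    # FRUC merge mode; the 1.25 factor biases the decision toward bi-prediction.
    def select_prediction_direction(cost0, cost1, cost_bi, factor=1.25):
        if cost_bi <= factor * min(cost0, cost1):
            return "bi-prediction"
        elif cost0 <= cost1:
            return "uni-prediction from list0"
        else:
            return "uni-prediction from list1"

    assert select_prediction_direction(100, 120, 124) == "bi-prediction"
    assert select_prediction_direction(100, 120, 130) == "uni-prediction from list0"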
[0197] The inter prediction direction selection is only applied to
the CU-level template matching process.
[0198] 2.2.8 Generalized Bi-Prediction
[0199] In conventional bi-prediction, the predictors from L0 and L1
are averaged to generate the final predictor using the equal weight
0.5. The predictor generation formula is shown in Equ. (4).
P.sub.TraditionalBiPred=(P.sub.L0+P.sub.L1+RoundingOffset)>>shiftNum (4)
[0200] In Equ. (4), P.sub.TraditionalBiPred is the final predictor
for the conventional bi-prediction, P.sub.L0 and P.sub.L1 are
predictors from L0 and L1, respectively, and RoundingOffset and
shiftNum are used to normalize the final predictor.
[0201] Generalized Bi-prediction (GBi) is proposed to allow
applying different weights to predictors from L0 and L1. The
predictor generation is shown in Equ. (5).
P.sub.GBi=((1-w.sub.1)*P.sub.L0+w.sub.1*P.sub.L1+RoundingOffset.sub.GBi)>>shiftNum.sub.GBi (5)
[0202] In Equ. (5), P.sub.GBi is the final predictor of GBi.
(1-w.sub.1) and w.sub.1 are the selected GBI weights applied to the
predictors of L0 and L1, respectively. RoundingOffset.sub.GBi and
shiftNum.sub.GBi are used to normalize the final predictor in
GBi.
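For illustration only, a fixed-point Python sketch of Equ. (5),
under the assumption that w.sub.1 is expressed in eighths (e.g.,
w.sub.1=5/8 maps to 5) with shiftNum.sub.GBi=3 and
RoundingOffset.sub.GBi=4; these fixed-point constants are
assumptions of this sketch, not taken from the text:

    # Illustrative only: GBi predictor with w1 in eighths.
    def gbi_predictor(p_l0, p_l1, w1_num, shift=3):
        rounding_offset = 1 << (shift - 1)
        w0_num = (1 << shift) - w1_num   # (1 - w1) in the same fixed-point scale
        return (w0_num * p_l0 + w1_num * p_l1 + rounding_offset) >> shift

    # Example with w1 = 5/8: (3*80 + 5*160 + 4) >> 3 = 130.
    assert gbi_predictor(80, 160, w1_num=5) == 130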
[0203] The supported weight set for w.sub.1 is {-1/4, 3/8, 1/2, 5/8,
5/4}. One equal-weight set and four unequal-weight sets are
supported. For the equal-weight case, the process to generate the
final predictor is exactly the same as that in the conventional
bi-prediction mode. For the true bi-prediction cases in the random
access (RA) condition, the number of candidate weight sets is
reduced to three.
[0204] For advanced motion vector prediction (AMVP) mode, the
weight selection in GBI is explicitly signaled at CU-level if this
CU is coded by bi-prediction. For merge mode, the weight selection
is inherited from the merge candidate. In this proposal, GBi
supports DMVR to generate the weighted average of the template as
well as the final predictor for BMS-1.0.
[0205] 2.2.9 Multi-Hypothesis Inter Prediction
[0206] In the multi-hypothesis inter prediction mode, one or more
additional prediction signals are signaled, in addition to the
conventional uni/bi prediction signal. The resulting overall
prediction signal is obtained by sample-wise weighted
superposition. With the uni/bi prediction signal p.sub.uni/bi and
the first additional inter prediction signal/hypothesis h.sub.3,
the resulting prediction signal p.sub.3 is obtained as follows:
p.sub.3=(1-.alpha.)p.sub.uni/bi+.alpha.h.sub.3
[0207] The changes to the prediction unit syntax structure are
shown below:
TABLE-US-00001
prediction_unit( x0, y0, nPbW, nPbH ) {                       Descriptor
  ...
  if( !cu_skip_flag[ x0 ][ y0 ] ) {
    i = 0
    readMore = 1
    while( i < MaxNumAdditionalHypotheses && readMore ) {
      additional_hypothesis_flag[ x0 ][ y0 ][ i ]             ae(v)
      if( additional_hypothesis_flag[ x0 ][ y0 ][ i ] ) {
        ref_idx_add_hyp[ x0 ][ y0 ][ i ]                      ae(v)
        mvd_coding( x0, y0, 2+i )
        mvp_add_hyp_flag[ x0 ][ y0 ][ i ]                     ae(v)
        add_hyp_weight_idx[ x0 ][ y0 ][ i ]                   ae(v)
      }
      readMore = additional_hypothesis_flag[ x0 ][ y0 ][ i ]
      i++
    }
  }
}
[0208] The weighting factor .alpha. is specified by the syntax
element add_hyp_weight_idx, according to the following mapping:
TABLE-US-00002
add_hyp_weight_idx    .alpha.
0                     1/4
1                     -1/8
[0209] Note that for the additional prediction signals, the concept
of prediction list0/list1 is abolished, and instead one combined
list is used. This combined list is generated by alternatingly
inserting reference frames from list0 and list1 with increasing
reference index, omitting reference frames which have already been
inserted, such that double entries are avoided.
[0210] Analogously to the above, more than one additional prediction
signal can be used. The resulting overall prediction signal is
accumulated iteratively with each additional prediction signal:
p.sub.n+1=(1-.alpha..sub.n+1)p.sub.n+.alpha..sub.n+1h.sub.n+1
[0211] The resulting overall prediction signal is obtained as the
last p.sub.n (i.e., the p.sub.n having the largest index n).
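For illustration only, a minimal Python sketch of this iterative
accumulation (the function name is hypothetical):

    # Illustrative only: p_{n+1} = (1 - a_{n+1})*p_n + a_{n+1}*h_{n+1},
    # returning the last p_n.
    def accumulate_hypotheses(p_uni_bi, hypotheses):
        # hypotheses is a list of (alpha, h) pairs, applied in order.
        p = p_uni_bi
        for alpha, h in hypotheses:
            p = (1 - alpha) * p + alpha * h
        return p

    # Example with the two signalled weights 1/4 and -1/8:
    # p1 = 0.75*100 + 0.25*120 = 105; p2 = 1.125*105 - 0.125*80 = 108.125.
    assert accumulate_hypotheses(100, [(0.25, 120), (-0.125, 80)]) == 108.125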
[0212] Note that also for inter prediction blocks using MERGE mode
(but not SKIP mode), additional inter prediction signals can be
specified. Further note that, in the case of MERGE, not only the
uni/bi prediction parameters but also the additional prediction
parameters of the selected merging candidate can be used for the
current block.
[0213] Multi-hypothesis intra and inter prediction mode is also
known as Combined Inter and Intra Prediction (CIIP) mode.
[0214] 2.2.10 Multi-Hypothesis Prediction for Uni-Prediction of
AMVP Mode
[0215] In some examples, when the multi-hypothesis prediction is
applied to improve uni-prediction of AMVP mode, one flag is
signaled to enable or disable multi-hypothesis prediction for
inter_dir equal to 1 or 2, where 1, 2, and 3 represent list 0, list
1, and bi-prediction, respectively. Moreover, one more merge index
is signaled when the flag is true. In this way, multi-hypothesis
prediction turns uni-prediction into bi-prediction, where one
motion is acquired using the original syntax elements in AMVP mode
while the other is acquired using the merge scheme. The final
prediction uses 1:1 weights to combine these two predictions as in
bi-prediction. The merge candidate list is first derived from merge
mode with sub-CU candidates (e.g., affine, alternative temporal
motion vector prediction (ATMVP)) excluded. Next, it is separated
into two individual lists, one for list 0 (L0) containing all L0
motions from the candidates, and the other for list 1 (L1)
containing all L1 motions. After removing redundancy and filling
vacancy, two merge lists are generated for L0 and L1 respectively.
There are two constraints when applying multi-hypothesis prediction
for improving AMVP mode. First, it is enabled for those CUs with
the luma coding block (CB) area larger than or equal to 64. Second,
it is only applied to L1 when in low delay B pictures.
[0216] 2.2.11 Multi-Hypothesis Prediction for Skip/Merge Mode
[0217] In examples, when the multi-hypothesis prediction is applied
to skip or merge mode, whether to enable multi-hypothesis
prediction is explicitly signaled. An extra merge indexed
prediction is selected in addition to the original one. Therefore,
each candidate of multi-hypothesis prediction implies a pair of
merge candidates, containing one for the 1.sup.st merge indexed
prediction and the other for the 2.sup.nd merge indexed prediction.
However, in each pair, the merge candidate for the 2.sup.nd merge
indexed prediction is implicitly derived as the succeeding merge
candidate (i.e., the already signaled merge index plus one) without
signaling any additional merge index. After removing redundancy by
excluding those pairs containing similar merge candidates, and
after filling vacancies, the candidate list for multi-hypothesis
prediction is formed. Then, motions from a pair of two merge
candidates are
acquired to generate the final prediction, where 5:3 weights are
applied to the 1.sup.st and 2.sup.nd merge indexed predictions,
respectively. Moreover, a merge or skip CU with multi-hypothesis
prediction enabled can save the motion information of the
additional hypotheses for reference of the following neighboring
CUs in addition to the motion information of the existing
hypotheses. Note that sub-CU candidates (e.g., affine, ATMVP) are
excluded from the candidate list, and for low delay B pictures,
multi-hypothesis prediction is not applied to skip mode. Moreover,
when multi-hypothesis prediction is applied to merge or skip mode,
for those CUs with CU width or CU height less than 16, or those CUs
with both CU width and CU height equal to 16, bi-linear
interpolation filter is used in motion compensation for multiple
hypotheses. Therefore, the worst-case bandwidth (required access
samples per sample) for each merge or skip CU with multi-hypothesis
prediction enabled is calculated in Table 1 and each number is less
than half of the worst-case bandwidth for each 4.times.4 CU with
multi-hypothesis prediction disabled.
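For illustration only, a minimal Python sketch of the 5:3 weighted
combination of the two merge indexed predictions described above
(the function name and the integer rounding are assumptions of this
sketch):

    # Illustrative only: sample-wise 5:3 combination of the 1st and 2nd merge
    # indexed predictions, with rounding in integer arithmetic.
    def combine_5_3(pred1, pred2):
        return [(5 * a + 3 * b + 4) >> 3 for a, b in zip(pred1, pred2)]

    # Example: samples 100 and 60 give (500 + 180 + 4) >> 3 = 85.
    assert combine_5_3([100], [60]) == [85]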
[0218] 2.2.12 Ultimate Motion Vector Expression
[0219] In examples, ultimate motion vector expression (UMVE) is
presented. UMVE is used for either skip or merge modes with a
proposed motion vector expression method. Merge mode with Motion
Vector Difference (MMVD) mode is also known as Ultimate Motion
Vector Expression (UMVE) mode.
[0220] UMVE re-uses the same merge candidates as those used in VVC.
Among the merge candidates, a candidate can be selected, and is
further expanded by the proposed motion vector expression method.
[0221] UMVE provides a new motion vector expression with simplified
signaling. The expression method includes starting point, motion
magnitude, and motion direction.
[0222] FIG. 21 shows an example of a UMVE Search Process
[0223] FIG. 22 shows an example of UMVE Search Points.
[0224] This proposed technique uses the merge candidate list as it
is, but only candidates of the default merge type
(MRG_TYPE_DEFAULT_N) are considered for UMVE's expansion.
[0225] Base candidate index defines the starting point. Base
candidate index indicates the best candidate among candidates in
the list as follows.
TABLE-US-00003
TABLE 1 Base candidate IDX
Base candidate IDX   0             1             2             3
N.sup.th MVP         1.sup.st MVP  2.sup.nd MVP  3.sup.rd MVP  4.sup.th MVP
[0226] If the number of base candidates is equal to 1, Base
candidate IDX is not signaled.
[0227] Distance index is motion magnitude information. Distance
index indicates the pre-defined distance from the starting point
information. Pre-defined distance is as follows:
TABLE-US-00004
TABLE 2 Distance IDX
Distance IDX     0        1        2      3      4      5      6       7
Pixel distance   1/4-pel  1/2-pel  1-pel  2-pel  4-pel  8-pel  16-pel  32-pel
[0228] Direction index represents the direction of the MVD relative
to the starting point. The direction index can represent one of the
four directions as shown below.
TABLE-US-00005
TABLE 3 Direction IDX
Direction IDX   00    01    10    11
x-axis          +     -     N/A   N/A
y-axis          N/A   N/A   +     -
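For illustration only, a minimal Python sketch mapping the distance
and direction indices of Tables 2 and 3 to an MVD in quarter-pel
units (the names are hypothetical):

    # Illustrative only: UMVE MVD from distance and direction indices.
    UMVE_DISTANCES_QPEL = [1, 2, 4, 8, 16, 32, 64, 128]  # 1/4-pel ... 32-pel
    UMVE_DIRECTIONS = {0b00: (1, 0), 0b01: (-1, 0), 0b10: (0, 1), 0b11: (0, -1)}

    def umve_mvd(distance_idx, direction_idx):
        dist = UMVE_DISTANCES_QPEL[distance_idx]
        dx, dy = UMVE_DIRECTIONS[direction_idx]
        return (dx * dist, dy * dist)

    # Example: distance IDX 3 (2-pel) in the negative y direction.
    assert umve_mvd(3, 0b11) == (0, -8)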
[0229] The UMVE flag is signaled right after sending a skip flag and
merge flag. If the skip and merge flag is true, the UMVE flag is
parsed. If the UMVE flag is equal to 1, UMVE syntax elements are
parsed; if not, the AFFINE flag is parsed. If the AFFINE flag is
equal to 1, AFFINE mode is used; if not, the skip/merge index is
parsed for VTM's skip/merge mode.
[0230] No additional line buffer is needed for UMVE candidates,
because a skip/merge candidate of the software is directly used as
a base candidate. Using the input UMVE index, the supplement to the
MV is decided right before motion compensation, so there is no need
to hold a long line buffer for this.
[0231] 2.2.13 Affine Merge Mode with Prediction Offsets
[0232] In examples, UMVE is extended to affine merge mode; we will
call this UMVE affine mode hereafter. The proposed method selects
the first available affine merge candidate as a base predictor.
Then it applies a motion vector offset to each control point's
motion vector value from the base predictor. If there is no affine
merge candidate available, this proposed method will not be
used.
[0233] The selected base predictor's inter prediction direction and
the reference index of each direction are used without change.
[0234] In the current implementation, the current block's affine
model is assumed to be a 4-parameter model, so only 2 control points
need to be derived. Thus, only the first 2 control points of the
base predictor will be used as control point predictors.
[0235] For each control point, a zero_MVD flag is used to indicate
whether the control point of current block has the same MV value as
the corresponding control point predictor. If zero_MVD flag is
true, there is no other signaling needed for the control point.
Otherwise, a distance index and an offset direction index are
signaled for the control point.
[0236] A distance offset table with a size of 5 is used as shown in
the table below. Distance index is signaled to indicate which
distance offset to use. The mapping of distance index and distance
offset values is shown in FIG. 23.
TABLE-US-00006
TABLE 1 Distance offset table
Distance IDX      0        1      2      3      4
Distance-offset   1/2-pel  1-pel  2-pel  4-pel  8-pel
[0237] The direction index can represent four directions as shown
below, where only the x or y direction may have an MV difference,
but not both directions.
TABLE-US-00007
Offset Direction IDX   00   01   10   11
x-dir-factor           +1   -1   0    0
y-dir-factor           0    0    +1   -1
[0238] If the inter prediction is uni-directional, the signaled
distance offset is applied in the signaled offset direction for each
control point predictor. The results will be the MV value of each
control point.
[0239] For example, when the base predictor is uni-directional and
the motion vector values of a control point are MVP (v.sub.px,
v.sub.py), then when the distance offset and direction index are
signaled, the motion vectors of the current block's corresponding
control points will be calculated as below:
MV(v.sub.x,v.sub.y)=MVP(v.sub.px,v.sub.py)+MV(x-dir-factor*distance-offset, y-dir-factor*distance-offset);
[0240] If the inter prediction is bi-directional, the signaled
distance offset is applied in the signaled offset direction to the
control point predictor's L0 motion vector, and the same distance
offset with the opposite direction is applied to the control point
predictor's L1 motion vector. The results will be the MV values of
each control point in each inter prediction direction.
[0241] For example, when the base predictor is bi-directional, the
motion vector values of a control point on L0 are MVP.sub.L0
(v.sub.0px, v.sub.0py), and the motion vector of that control point
on L1 is MVP.sub.L1 (v.sub.1px, v.sub.1py). When the distance offset
and direction index are signaled, the motion vectors of the current
block's corresponding control points will be calculated as below:
MV.sub.L0(v.sub.0x,v.sub.0y)=MVP.sub.L0(v.sub.0px,v.sub.0py)+MV(x-dir-factor*distance-offset, y-dir-factor*distance-offset);
MV.sub.L1(v.sub.1x,v.sub.1y)=MVP.sub.L1(v.sub.1px,v.sub.1py)+MV(-x-dir-factor*distance-offset, -y-dir-factor*distance-offset);
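For illustration only, a minimal Python sketch of applying the
signaled offset to a control point predictor for the uni-directional
and bi-directional cases (the function name is hypothetical):

    # Illustrative only: apply the signalled distance/direction offset to one
    # control point predictor; x_dir/y_dir come from the offset direction table.
    def apply_cp_offset(mvp_l0, mvp_l1, x_dir, y_dir, dist, bi_directional):
        mv_l0 = (mvp_l0[0] + x_dir * dist, mvp_l0[1] + y_dir * dist)
        if not bi_directional:
            return mv_l0, None
        # For bi-prediction, the same offset with opposite sign is used on L1.
        mv_l1 = (mvp_l1[0] - x_dir * dist, mvp_l1[1] - y_dir * dist)
        return mv_l0, mv_l1

    # Example: direction IDX 00 (+x), 1-pel offset, bi-directional predictor.
    assert apply_cp_offset((10, 4), (-6, 2), 1, 0, 1, True) == ((11, 4), (-7, 2))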
[0242] 2.2.14 Bi-Directional Optical Flow
[0243] In BIO, motion compensation is first performed to generate
the first predictions (in each prediction direction) of the current
block. The first predictions are used to derive the spatial
gradient, the temporal gradient and the optical flow of each
subblock/pixel within the block, which are then used to generate
the second prediction, i.e., the final prediction of the
subblock/pixel. The details are described as follows.
[0244] Bi-directional Optical flow (BIO) is a sample-wise motion
refinement which is performed on top of block-wise motion
compensation for bi-prediction. The sample-level motion refinement
does not use signalling.
[0245] FIG. 24 shows an example of an optical flow trajectory
[0246] Let I.sup.(k) be the luma value from reference k (k=0, 1)
after block motion compensation, and let
.differential.I.sup.(k)/.differential.x and
.differential.I.sup.(k)/.differential.y be the horizontal and
vertical components of the I.sup.(k) gradient, respectively.
Assuming the optical flow is valid, the motion vector field
(v.sub.x, v.sub.y) is given by the following equation:

∂I^(k)/∂t + v_x·∂I^(k)/∂x + v_y·∂I^(k)/∂y = 0
[0247] Combining this optical flow equation with Hermite
interpolation for the motion trajectory of each sample results in a
unique third-order polynomial that matches both the function values
I.sup.(k) and derivatives .differential.I.sup.(k)/.differential.x,
.differential.I.sup.(k)/.differential.y at the ends. The value of
this polynomial at t=0 is the BIO prediction:

pred_BIO = 1/2·(I^(0) + I^(1) + v_x/2·(τ_1·∂I^(1)/∂x - τ_0·∂I^(0)/∂x) + v_y/2·(τ_1·∂I^(1)/∂y - τ_0·∂I^(0)/∂y))   (4)
[0248] Here, .tau..sub.0 and .tau..sub.1 denote the distances to
the reference frames, as shown in FIG. 24. Distances .tau..sub.0
and .tau..sub.1 are calculated based on POC for Ref0 and Ref1:
.tau..sub.0=POC(current)-POC(Ref0),
.tau..sub.1=POC(Ref1)-POC(current). If both predictions come from
the same time direction (either both from the past or both from the
future) then the signs are different (i.e.,
.tau..sub.0.tau..sub.1<0). In this case, BIO is applied only if
the prediction is not from the same time moment (i.e.,
.tau..sub.0.noteq..tau..sub.1), both referenced regions have
non-zero motion (MVx.sub.0, MVy.sub.0, MVx.sub.1,
MVy.sub.1.noteq.0) and the block motion vectors are proportional to
the time distance
(MVx.sub.0/MVx.sub.1=MVy.sub.0/MVy.sub.1=-.tau..sub.0/.tau..sub.1).
[0249] The motion vector field (v.sub.x, v.sub.y) is determined by
minimizing the difference .DELTA. between the values in points A and
B (the intersection of the motion trajectory and the reference frame
planes in FIG. 24). The model uses only the first linear term of a
local Taylor expansion for .DELTA.:

Δ = (I^(0) - I^(1)) + v_x·(τ_1·∂I^(1)/∂x + τ_0·∂I^(0)/∂x) + v_y·(τ_1·∂I^(1)/∂y + τ_0·∂I^(0)/∂y)   (5)
[0250] All values in Equation 5 depend on the sample location (i',
j'), which was omitted from the notation so far. Assuming the
motion is consistent in the local surrounding area, .DELTA. is
minimized inside a (2M+1).times.(2M+1) square window .OMEGA.
centered on the currently predicted point (i, j), where M is equal
to 2:

(v_x, v_y) = argmin_{v_x, v_y} Σ_{[i',j'] ∈ Ω} Δ²[i', j']   (6)
[0251] For this optimization problem, the JEM uses a simplified
approach making first a minimization in the vertical direction and
then in the horizontal direction. This results in:

v_x = (s_1 + r) > m ? clip3(-thBIO, thBIO, -s_3/(s_1 + r)) : 0   (7)

v_y = (s_5 + r) > m ? clip3(-thBIO, thBIO, -(s_6 - v_x·s_2/2)/(s_5 + r)) : 0   (8)

where

s_1 = Σ_{[i',j'] ∈ Ω} (τ_1·∂I^(1)/∂x + τ_0·∂I^(0)/∂x)²   (9)
s_3 = Σ_{[i',j'] ∈ Ω} (I^(1) - I^(0))·(τ_1·∂I^(1)/∂x + τ_0·∂I^(0)/∂x)
s_2 = Σ_{[i',j'] ∈ Ω} (τ_1·∂I^(1)/∂x + τ_0·∂I^(0)/∂x)·(τ_1·∂I^(1)/∂y + τ_0·∂I^(0)/∂y)
s_5 = Σ_{[i',j'] ∈ Ω} (τ_1·∂I^(1)/∂y + τ_0·∂I^(0)/∂y)²
s_6 = Σ_{[i',j'] ∈ Ω} (I^(1) - I^(0))·(τ_1·∂I^(1)/∂y + τ_0·∂I^(0)/∂y)
[0252] In order to avoid division by zero or a very small value,
regularization parameters r and m are introduced in Equations 10
and 11:

r = 500·4^(d-8)   (10)

m = 700·4^(d-8)   (11)
[0253] Here d is bit depth of the video samples.
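For illustration only, a floating-point Python sketch of Equations
(7), (8), (10) and (11) (the function names are hypothetical; clip3
is the usual clipping helper):

    # Illustrative only: BIO motion refinement from the aggregated terms s1..s6.
    def clip3(lo, hi, x):
        return max(lo, min(hi, x))

    def bio_motion(s1, s2, s3, s5, s6, d, th_bio):
        r = 500 * 4 ** (d - 8)   # Equation (10)
        m = 700 * 4 ** (d - 8)   # Equation (11)
        vx = clip3(-th_bio, th_bio, -s3 / (s1 + r)) if (s1 + r) > m else 0
        vy = clip3(-th_bio, th_bio, -(s6 - vx * s2 / 2) / (s5 + r)) if (s5 + r) > m else 0
        return vx, vy

    # Example with 8-bit samples (r = 500, m = 700):
    # bio_motion(1500, 0, -400, 2500, 600, d=8, th_bio=64) returns (0.2, -0.2).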
[0254] In order to keep the memory access for BIO the same as for
regular bi-predictive motion compensation, all prediction and
gradients values, I.sup.(k),
.differential.I.sup.(k)/.differential.x,
.differential.I.sup.(k)/.differential.y, are calculated only for
positions inside the current block. In Equation 9, a
(2M+1).times.(2M+1) square window .OMEGA. centered on a currently
predicted point on the boundary of the predicted block needs to
access positions outside of the block (as shown in FIG. 25A). In the
JEM,
values of I.sup.(k), .differential.I.sup.(k)/.differential.x,
.differential.I.sup.(k)/.differential.y outside of the block are
set to be equal to the nearest available value inside the block.
For example, this can be implemented as padding, as shown in FIG.
25B.
[0255] FIGS. 25A and 25B show BIO without block extension: (a)
access positions outside of the block; (b) padding is used in order
to avoid extra memory access and calculation.
[0256] With BIO, it is possible that the motion field is refined
for each sample. To reduce the computational complexity, a
block-based design of BIO is used in the JEM. The motion refinement
is calculated based on 4.times.4 blocks. In the block-based BIO, the
values of s.sub.n in Equation 9 of all samples in a 4.times.4 block
are aggregated, and then the aggregated values of s.sub.n are used
to derive the BIO motion vector offsets for the 4.times.4 block.
More specifically, the following formula is used for block-based
BIO derivation:
s_{1,b_k} = Σ_{(x,y) ∈ b_k} Σ_{[i',j'] ∈ Ω(x,y)} (τ_1·∂I^(1)/∂x + τ_0·∂I^(0)/∂x)²
s_{3,b_k} = Σ_{(x,y) ∈ b_k} Σ_{[i',j'] ∈ Ω(x,y)} (I^(1) - I^(0))·(τ_1·∂I^(1)/∂x + τ_0·∂I^(0)/∂x)
s_{2,b_k} = Σ_{(x,y) ∈ b_k} Σ_{[i',j'] ∈ Ω(x,y)} (τ_1·∂I^(1)/∂x + τ_0·∂I^(0)/∂x)·(τ_1·∂I^(1)/∂y + τ_0·∂I^(0)/∂y)
s_{5,b_k} = Σ_{(x,y) ∈ b_k} Σ_{[i',j'] ∈ Ω(x,y)} (τ_1·∂I^(1)/∂y + τ_0·∂I^(0)/∂y)²
s_{6,b_k} = Σ_{(x,y) ∈ b_k} Σ_{[i',j'] ∈ Ω(x,y)} (I^(1) - I^(0))·(τ_1·∂I^(1)/∂y + τ_0·∂I^(0)/∂y)   (12)
where b.sub.k denotes the set of samples belonging to the k-th
4.times.4 block of the predicted block. s.sub.n in Equations 7 and
8 are replaced by ((s.sub.n,bk)>>4) to derive the associated
motion vector offsets.
[0257] In some cases, the MV refinement of BIO might be unreliable
due to noise or irregular motion. Therefore, in BIO, the magnitude
of the MV refinement is clipped to a threshold value thBIO. The
threshold value is determined based on whether the reference
pictures of the current picture are all from one direction. If all
the reference pictures of the current picture are from one
direction, the value of the threshold is set to 12·2^(14-d);
otherwise, it is set to 12·2^(13-d).
[0258] Gradients for BIO are calculated at the same time as motion
compensation interpolation, using operations consistent with the
HEVC motion compensation process (2D separable FIR). The input for
this 2D separable FIR is the same reference frame sample as for the
motion compensation process, with fractional position (fracX,
fracY) according to the fractional part of the block motion vector.
For the horizontal gradient .differential.I/.differential.x, the
signal is first interpolated vertically using BIOfilterS
corresponding to the fractional position fracY with de-scaling
shift d-8; then the gradient filter BIOfilterG is applied in the
horizontal direction corresponding to the fractional position fracX
with de-scaling shift by 18-d. For the vertical gradient
.differential.I/.differential.y, the gradient filter is first
applied vertically using BIOfilterG corresponding to the fractional
position fracY with de-scaling shift d-8; then signal displacement
is performed using BIOfilterS in the horizontal direction
corresponding to the fractional position fracX with de-scaling
shift by 18-d. The length of the interpolation filters for gradient
calculation (BIOfilterG) and signal displacement (BIOfilterS) is
shorter (6-tap) in order to maintain reasonable complexity. Table 2
shows the filters used for gradient calculation for different
fractional positions of the block motion vector in BIO. Table 3
shows the interpolation filters used for prediction signal
generation in BIO.
TABLE-US-00008
TABLE 2 Filters for gradient calculation in BIO
Fractional pel position    Interpolation filter for gradient (BIOfilterG)
0                          {8, -39, -3, 46, -17, 5}
1/16                       {8, -32, -13, 50, -18, 5}
1/8                        {7, -27, -20, 54, -19, 5}
3/16                       {6, -21, -29, 57, -18, 5}
1/4                        {4, -17, -36, 60, -15, 4}
5/16                       {3, -9, -44, 61, -15, 4}
3/8                        {1, -4, -48, 61, -13, 3}
7/16                       {0, 1, -54, 60, -9, 2}
1/2                        {-1, 4, -57, 57, -4, 1}
TABLE-US-00009
TABLE 3 Interpolation filters for prediction signal generation in BIO
Fractional pel position    Interpolation filter for prediction signal (BIOfilterS)
0                          {0, 0, 64, 0, 0, 0}
1/16                       {1, -3, 64, 4, -2, 0}
1/8                        {1, -6, 62, 9, -3, 1}
3/16                       {2, -8, 60, 14, -5, 1}
1/4                        {2, -9, 57, 19, -7, 2}
5/16                       {3, -10, 53, 24, -8, 2}
3/8                        {3, -11, 50, 29, -9, 2}
7/16                       {3, -11, 44, 35, -10, 3}
1/2                        {3, -10, 35, 44, -11, 3}
[0259] In the JEM, BIO is applied to all bi-predicted blocks when
the two predictions are from different reference pictures. When LIC
is enabled for a CU, BIO is disabled.
[0260] In the JEM, OBMC is applied for a block after normal MC
process. To reduce the computational complexity, BIO is not applied
during the OBMC process. This means that BIO is only applied in the
MC process for a block when using its own MV and is not applied in
the MC process when the MV of a neighboring block is used during
the OBMC process.
[0261] It is proposed that before calculating the temporal gradient
in BIO, a reference block (or a prediction block) may first be
modified, and the calculation of the temporal gradient is based on
the modified reference block. In one example, the mean is removed
from all reference blocks. In one example, the mean is defined as
the average of selected samples in the reference block. In one
example, all pixels in a reference block X or a sub-block of the
reference block X are used to calculate MeanX. In one example, only
partial pixels in a reference block X or a sub-block of the
reference block are used to calculate MeanX; for example, only
pixels in every second row/column are used.
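For illustration only, a minimal Python sketch of mean removal with
partial pixels, using every second row and column as in the last
example (the function name is hypothetical):

    # Illustrative only: subtract a mean computed from a subset of samples.
    def remove_mean_partial(ref_block):
        # MeanX uses only every second row and column of the reference block.
        subset = [v for row in ref_block[::2] for v in row[::2]]
        mean_x = sum(subset) / len(subset)
        return [[v - mean_x for v in row] for row in ref_block]

    # Example on a 4x4 block: MeanX = (10 + 20 + 30 + 40) / 4 = 25.
    block = [[10, 0, 20, 0], [0, 0, 0, 0], [30, 0, 40, 0], [0, 0, 0, 0]]
    centered = remove_mean_partial(block)
    assert centered[0][0] == -15   # 10 - 25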
[0262] 2.2.15 Decoder-Side Motion Vector Refinement
[0263] In bi-prediction operation, for the prediction of one block
region, two prediction blocks, formed using a motion vector (MV) of
list0 and a MV of list1, respectively, are combined to form a
single prediction signal. In the decoder-side motion vector
refinement (DMVR) method, the two motion vectors of the
bi-prediction are further refined by a bilateral template matching
process. The bilateral template matching applied in the decoder to
perform a distortion-based search between a bilateral template and
the reconstruction samples in the reference pictures in order to
obtain a refined MV without transmission of additional motion
information.
[0264] In DMVR, a bilateral template is generated as the weighted
combination (i.e. average) of the two prediction blocks, from the
initial MV0 of list0 and MV1 of list1, respectively, as shown in
FIG. 26. The template matching operation consists of calculating
cost measures between the generated template and the sample region
(around the initial prediction block) in the reference picture. For
each of the two reference pictures, the MV that yields the minimum
template cost is considered as the updated MV of that list to
replace the original one. In the JEM, nine MV candidates are
searched for each list. The nine MV candidates include the original
MV and 8 surrounding MVs with one luma sample offset to the
original MV in either the horizontal or vertical direction, or
both. Finally, the two new MVs, i.e., MV0' and MV1' as shown in
FIG. 26, are used for generating the final bi-prediction results. A
sum of absolute differences (SAD) is used as the cost measure.
Note that when calculating the cost of a prediction block generated
by one surrounding MV, the MV rounded to integer pel is actually
used to obtain the prediction block instead of the real MV.
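For illustration only, a minimal Python sketch of the nine-candidate
search for one list, assuming MVs in integer luma-sample units and a
hypothetical sad_of callback that returns the SAD between the
bilateral template and the block fetched at a candidate MV:

    # Illustrative only: DMVR refinement over the original MV plus the 8
    # surrounding MVs at one-luma-sample offsets.
    def dmvr_refine(initial_mv, sad_of):
        best_mv, best_cost = initial_mv, sad_of(initial_mv)
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if dx == 0 and dy == 0:
                    continue   # the original MV is already evaluated
                cand = (initial_mv[0] + dx, initial_mv[1] + dy)
                cost = sad_of(cand)
                if cost < best_cost:
                    best_mv, best_cost = cand, cost
        return best_mv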
[0265] DMVR is applied for the merge mode of bi-prediction with one
MV from a reference picture in the past and another from a
reference picture in the future, without the transmission of
additional syntax elements. In the JEM, when LIC, affine motion,
FRUC, or sub-CU merge candidate is enabled for a CU, DMVR is not
applied.
[0266] FIG. 26 shows an example of a DMVR based on bilateral
template matching
[0267] 2.3 Related Method
[0268] For motion refinement and coding in video coding, an MV
update method and a two-step inter prediction method are proposed.
The MV derived between reference block 0 and reference block 1 in
BIO is scaled and added to the original motion vectors of list 0
and list 1. Meanwhile, the updated MV is used to perform motion
compensation and a second inter prediction is generated as the
final prediction.
[0269] Meanwhile, the temporal gradient is modified by removing the
mean difference between reference block 0 and reference block
1.
[0270] In methods for sub-block based prediction in video coding,
for several sub-blocks with different motion vectors, only one set
of MVs is generated for the chroma component.
3. Relationship to Other Technologies
[0271] A sub-block based prediction method is proposed. First, we
propose to divide the current block into sub-blocks in different
ways depending on the color component and the color format (such as
4:2:0 or 4:2:2). Second, we propose that the MV of a sub-block of
one color component can be derived from the MV(s) of one or more
sub-blocks of another color component, which has (have) already
been derived. Third, we propose to unify the constraint for merge
affine mode and non-merge affine mode.
[0272] For example, if an 8.times.8 CU is split into four 4.times.4
sub-blocks and each of the sub-blocks has its own motion vector,
then we calculate the average of the four motion vectors and use it
(scaled by 2) as the motion vector of the chroma component in the
YCbCr 4:2:0 case. In this way, motion compensation of the Cb/Cr
component is performed for a 4.times.4 block instead of four
2.times.2 blocks, and the memory bandwidth can be saved.
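For illustration only, a minimal Python sketch of this chroma MV
derivation; whether the average is multiplied or divided by 2
depends on the codec's chroma MV convention, and the division shown
here is an assumption of this sketch:

    # Illustrative only: one chroma MV from four 4x4 luma sub-block MVs.
    def chroma_mv_from_luma(mvs):
        # Average the four luma sub-block MVs component-wise.
        avg_x = sum(mv[0] for mv in mvs) / 4
        avg_y = sum(mv[1] for mv in mvs) / 4
        # Assumption: halve the average for the 4:2:0 chroma sampling grid.
        return (avg_x / 2, avg_y / 2)

    # Example: averaging (4, 8), (4, 8), (8, 8), (8, 8) gives (6, 8) -> (3, 4).
    assert chroma_mv_from_luma([(4, 8), (4, 8), (8, 8), (8, 8)]) == (3.0, 4.0)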
[0273] Interweaved prediction is proposed for sub-block motion
compensation. With interweaved prediction, a block is divided into
sub-blocks with more than one dividing patterns. A dividing pattern
is defined as the way to divide a block into sub-blocks, including
the size of sub-blocks and the position of sub-blocks. For each
dividing pattern, a corresponding prediction block may be generated
by deriving motion information of each sub-block based on the
dividing pattern. Therefore, even for one prediction direction,
multiple prediction blocks may be generated by multiple dividing
patterns. Alternatively, for each prediction direction, only one
dividing pattern may be applied.
[0274] Suppose there are X dividing patterns, and X prediction
blocks of the current block, denoted as P.sub.0, P.sub.1, . . . ,
P.sub.X-1 are generated by sub-block based prediction with the X
dividing patterns. The final prediction of the current block,
denoted as P, can be generated as:
P(x, y) = ( Σ_{i=0}^{X-1} w_i(x, y)·P_i(x, y) ) / ( Σ_{i=0}^{X-1} w_i(x, y) )   (15)
[0275] where (x, y) is the coordinate of a pixel in the block and
w.sub.i(x, y) is the weighting value of P.sub.i. Without loss of
generality, it is supposed that
.SIGMA..sub.i=0.sup.X-1w.sub.i(x, y)=(1<<N), wherein N is a
non-negative value. FIG. 27 shows an example of interweaved
prediction with two dividing patterns.
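For illustration only, a minimal Python sketch of Equation (15)
under the integer-weight assumption .SIGMA.w.sub.i=(1<<N), so that
the division reduces to a right shift with rounding (the function
name is hypothetical):

    # Illustrative only: combine per-pattern predictions with integer weights
    # whose sum at every position is 1 << n_shift.
    def interweave(preds, weights, n_shift):
        h, w = len(preds[0]), len(preds[0][0])
        offset = 1 << (n_shift - 1)
        out = [[0] * w for _ in range(h)]
        for y in range(h):
            for x in range(w):
                acc = sum(p[y][x] * wt[y][x] for p, wt in zip(preds, weights))
                out[y][x] = (acc + offset) >> n_shift
        return out

    # Example: two patterns with equal weights (N = 1) average the predictions.
    assert interweave([[[10]], [[20]]], [[[1]], [[1]]], n_shift=1) == [[15]]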
4. Examples of Problems Solved by Embodiments that Use the Present
Techniques
[0276] A two-step inter prediction method is proposed; however,
such a method can be performed multiple times to obtain more
accurate motion information, such that higher coding gains may be
expected.
[0277] In DMVR, there is no two-step inter prediction method.
5. Example Embodiments Presented in an Itemized Format
[0278] To solve the problems mentioned above, it is proposed to
refine motion information (e.g., motion vectors) more than once for
those coding tools which need to perform a decoder-side motion
refinement process (e.g., DMVR), and/or for those coding tools
which rely on some intermediate motion information different from
the final motion information used for motion compensation (e.g.,
BIO).
[0279] It is also proposed that even for coding tools which don't
apply motion information refinement at the decoder side, motion
information of a block/a sub-block within a coded block may be
refined once or multiple times and the refined motion information
may be used for motion vector prediction of blocks to be coded
afterwards, and/or filtering process.
[0280] Hereinafter, DMVD is used to represent DMVR or BIO or another
decoder-side motion vector refinement method or pixel refinement
method.
[0281] Denote SATD as the sum of absolute transformed differences,
MRSATD as the mean-removed sum of absolute transformed differences,
SSE as the sum of squares error, and MRSSE as the mean-removed sum
of squares error.
[0282] The detailed items below should be considered as examples to
explain general concepts. These inventions should not be
interpreted in a narrow way. Furthermore, these items describe
coding/decoding techniques that can be combined in any manner.
[0283] 1. It is proposed that the motion vector refinement process
may be performed multiple times, e.g., performed N times wherein N
is a non-negative integer number. Suppose the signaled MVs are
(MVLX0_x, MVLX0_y) and the i-th refined MVs are (MVLXi_x, MVLXi_y),
wherein LX=L0 or L1 and i=1, . . . , N. [0284] a) In one example,
the (i-1)th refined motion vectors (i.e., motion vectors after the
(i-1)th iteration, and when (i-1) equals to 0, the signaled motion
vectors are used) may be used to generate the i-th motion
compensated reference blocks of the PU/CU/block/sub-block.
Alternatively, furthermore, the i-th motion compensated reference
blocks may be further used to derive the i-th temporal gradients,
spatial gradients and refined motion vectors. An example is shown
in FIG. 28. [0285] b) Different interpolation filters from those
used for inter-coded blocks which are not coded with proposed
method may be used for motion compensation in different motion
vector refinement steps to reduce the memory bandwidth. For
example, short-tap filters are used in 1st.about.(N-1)th steps.
[0286] c) Intermediate refined MVs from the i-th iteration may be
firstly modified before being used to generate reference blocks. In
one example, fractional MVs are rounded to integer MVs and then are
used to generate reference blocks in some steps, for example, in
1st.about.(N-1)th steps. [0287] d) In some or all iterations, one
block may be first split into several sub-blocks and each sub-block
is treated in the same way as a normal coding block with size equal
to the sub-block size. [0288] i. In one example, a block is firstly
split into multiple sub-blocks, and each block's motion information
may be refined multiple times. [0289] ii. Alternatively, a block is
firstly split into multiple sub-blocks, and only for partial of the
sub-blocks, their motion information may be refined multiple times.
[0290] iii. Alternatively, a block is firstly split into multiple
sub-blocks, and different sub-blocks' motion information may be
refined with different numbers of iterations (e.g., for some
sub-blocks no refinement is applied, and for some, motion
information may be refined multiple times). [0291] iv. Alternatively, the motion
information of the whole block is refined N-1 times and afterwards,
based on the (N-1)th refined motion information, the block is split
to multiple sub-blocks, and for each sub-block, its motion
information may be further refined. [0292] e) In different steps,
the refined MVs may be derived at different sub-block size. [0293]
f) In one example, the refined motion vectors in the Nth step are
used to perform the motion compensation and then the method
described in previous section[00213] is used to generate the final
prediction of the CU/sub-block. [0294] g) In one example,
predictions are generated for a block/sub-block in each step (or
some steps) and they are weighted averaged to generate the final
predictions of the block/sub-block. [0295] h) In one example, MVs
derived in each step may be further constrained. [0296] i. For
example, |MVLXi_x-MVLX0_x|<=Tx and |MVLXi_y-MVLX0_y|<=Ty, for
all 1<=i<=N. [0297] ii. For example,
Max{MVLXi_x-MVLXj_x}<=Tx and Max{MVLXi_y-MVLXj_y}<=Ty, for
all 1<=i, j<=N. [0298] iii. The thresholds Tx and Ty can be
equal or not. They can be predefined numbers or signaled from the
encoder to the decoder in VPS/SPS/PPS/slice header/tile group
header/tile/CTU/CU. [0299] i) The motion vector refinement process
may be terminated after the Kth step for a block/sub-block, if the
refined MVs after the Kth step and the input MVs in the Kth step
are similar. [0300] i. For example, if the absolute difference
between the vertical or/and horizontal component of the refined MV
and the input MV (in any prediction direction) is not larger than T
quarter-pel distance, wherein T=1/4, 1/3, 1/2, 1, 2, 3, 4, . . .
etc., the motion vector refinement process is terminated. [0301]
ii. For example, if sum of the absolute difference between the
vertical and horizontal component of the refined MV and the input
MV (in any prediction direction) is not larger than T quarter-pel
distance, the motion vector refinement process is terminated.
[0302] j) The iterative number N may be adaptive. [0303] i. For
example, N depends on the current block size. [0304] 1. For
example, N is larger for a larger block and vice versa. [0305] ii.
For example, N depends on the coding mode of the current block.
[0306] iii. For example, N depends on MVD (Motion Vector
Difference) of the current block. [0307] 1. For example, N is
larger when |MVD| is larger. [0308] iv. For example, N depends on
QP [0309] 1. For example, N is larger when QP is larger. [0310] v.
N may be signaled from the encoder to the decoder in
VPS/SPS/PPS/picture header/slice header/tile group
header/tile/CTU/CU. [0311] 2. With the refined motion vectors in
bullet 1, the two-step inter-prediction process may be extended to
multiple-step inter-prediction, wherein the finally refined motion
vectors after N iterations are used to perform the final motion
compensation and generate the final prediction of a
block/sub-block. [0312] a) Alternatively, predictions are generated
for a block/sub-block in each step (or some steps) and the final
predictions of the block/sub-block may be generated by those
predictions. In one example, they are weighted averaged to generate
the final predictions of the block/sub-block. [0313] 3. It is
proposed that the temporal gradient modification process can be
performed for each M1.times.N1 sub-block though the BIO process may
be performed for each M2.times.N2 sub-block. [0314] a) In one
example, refined motion vectors are derived for each 4.times.4
block while the temporal gradient modification process is performed
for each 8.times.8 block. That is, M1=N1=8, M2=N2=4. [0315] b) In
one example, refined motion vectors are derived for each 8.times.8
block while the temporal gradient modification process is performed
for each 4.times.4 block. [0316] c) In one example, refined motion
vectors are derived for each 4.times.4 block while the temporal
gradient modification process is performed for each 4.times.4
block. [0317] d) In one example, refined motion vectors are derived
for each 8.times.8 block while the temporal gradient modification
process is performed for each 8.times.8 block. [0318] e) M1, N1,
M2, N2 may be pre-defined or depend on the block size/coded
modes/signaled in VPS/SPS/PPS/picture header/tile groups, etc.
[0319] 4. It is proposed to only use partial pixels of a
block/sub-block for calculating the temporal/spatial gradients,
which may be used for deriving the motion vector of the
block/sub-block. [0320] a) In one example, temporal and spatial
gradients are calculated for every N rows or/and columns. For
example, N=2. [0321] b) In one example, temporal and spatial
gradients are calculated for the
top-left/top-right/bottom-left/bottom-right quarter of the
CU/sub-block. [0322] c) In one example, temporal and spatial
gradients are calculated for every N rows or/and columns of the
top-left/top-right/bottom-left/bottom-right quarter of the
CU/sub-block. [0323] d) Such methods may be enabled for the
two-step inter-prediction or multiple-step inter-prediction in
bullet 2, wherein temporal/spatial gradients may only be used to
derive refined motion vectors of a block/sub-block, and are not
directly used to refine prediction of the block/sub-block. [0324]
5. It is proposed that the motion vector refinement process in DMVR
may be performed multiple times. [0325] a) In one example, the
(i-1)th refined motion vectors (i.e., motion vectors after the
(i-1)th iteration, and when (i-1) equals to 0, the signaled motion
vectors are used) can be used as the start searching point in the
i-th motion vector refinement process, i=1, . . . , N, wherein N is
a non-negative integer number. [0326] b) Different interpolation
filters from those used for inter-coded blocks which are not coded
with proposed method may be used in different motion vector
refinement steps to reduce the memory bandwidth. For example,
short-tap filters are used in 1st.about.(N-1)th steps. [0327] c) In
one example, fractional MVs are rounded to integer MVs and are then
used as the start searching point in some steps, for example, in
1st.about.(N-1)th step. [0328] 6. It is proposed that the refined
motion vectors derived in BIO or DMVR or other decoder side motion
refinement technologies may be only used for the final motion
compensation of some components. [0329] a) In one example, the
refined motion vectors are only used for the final motion
compensation of Cb or/and Cr component. [0330] b) In one example,
the refined motion vectors are only used for the final motion
compensation of luma component. [0331] c) In one example, in BIO,
the refined motion vectors are used to perform motion compensation
and generate the final prediction of chroma components, and the
method described in [00185] is used to generate the final
prediction of luma component. [0332] i. For example, the motion
vector is refined only once and is used for motion compensation of
the chroma component, and the method described in previous section
(section 2.2.14)[00185] is used to generate the final prediction of
luma component. [0333] d) In one example, in BIO, the method
described in [00185](section 2.2.14) is used to generate the final
prediction of both luma and chroma components. [0334] e) In one
example, in BIO and DMVR, the refined motion vectors are used to
perform motion compensation and generate the final prediction of
both luma and chroma components. [0335] 7. Methods for sub-block
based prediction in Video Coding can be used for motion
compensation of chroma component to reduce memory bandwidth. For
example, four neighboring 4.times.4 blocks are grouped together,
and only one set of motion vector is derived for the chroma
component (in YCbCr 4:2:0 case) and is used to perform motion
compensation of a 4.times.4 chroma block. [0336] 8. It is proposed
that BIO or/and DMVR and/or other decoder side motion refinement
technologies may be performed at sub-block level. [0337] a)
Alternatively, furthermore, Interweaved Prediction in Video Coding
can be used to derive different motion vectors for different
dividing patterns, and the final prediction is generated based on
the prediction value of all dividing patterns. [0338] 9. The
proposed methods may be applied under certain conditions, such as
based on block sizes, encoded mode information, motion information,
slice/picture/tile types, etc. [0339] a) In one example, when a
block size contains less than M*H samples, e.g., 16 or 32 or 64
luma samples, the above methods are not allowed. [0340] b) In one
example, when a block size contains more than M*H samples, e.g., 16
or 32 or 64 luma samples, the above methods are not allowed. [0341]
c) Alternatively, when minimum size of a block's width or/and
height is smaller than or no larger than X, the above methods are
not allowed. In one example, X is set to 8. [0342] d)
Alternatively, when a block's width >th1 or >=th1 and/or a
block's height >th2 or >=th2, the above methods are not
allowed. In one example, th1 and/or th2 is set to 64. [0343] i. For example, the
above methods are disabled for M.times.M (e.g., 128.times.128)
block. [0344] ii. For example, the above methods are disabled for
N.times.M/M.times.N block, e.g., wherein N>=64, M=128. [0345]
iii. For example, the above methods are disabled for
N.times.M/M.times.N block, e.g., wherein N>=4, M=128. [0346] e)
Alternatively, when a block's width <th1 or <=th1 and/or a
block's height <th2 or <=th2, the above methods are not
allowed. In one example, th1 and/or th2 is set to 8. [0347] f) In
one example, in BIO, the above methods are disabled for blocks
coded in AMVP mode. [0348] g) In one example, in BIO or DMVR, the
above methods are disabled for blocks coded in skip mode. [0349]
10. For sub-block based methods (e.g., Affine, ATMVP, BIO, DMVR,
etc.), maximum number of sub-blocks may be fixed for all kinds of
CU/PU sizes. Suppose there will be K.times.L sub-blocks and one
block size is denoted by M.times.N. [0350] a) In one example, the
width of a sub-block is set to max(TH.sub.w, M/K). [0351] b) In one
example, the height of a sub-block is set to max(TH.sub.h, N/L).
[0352] c) TH.sub.w and/or TH.sub.h may be pre-defined (e.g., 4) or
signaled in SPS/PPS/picture/slice/tile group/tile level/group of
CTUs/CTU row/CTU/CU/PU. [0353] d) TH.sub.w and/or TH.sub.h may be
dependent on whether current block is bi-prediction or
uni-prediction. In one example, TH.sub.w and/or TH.sub.h may be set
to 4 for uni-prediction or 8 for bi-prediction. [0354] 11. For
sub-block based methods (e.g., Affine, ATMVP, BIO, DMVR, etc.),
whether and how to split the block into sub-blocks may be different
for different color components. [0355] a) In one example, whether
and how to split a chroma block depend on the width and height of
the chroma block, independently of whether and how to split its
corresponding luma block. [0357] b) In one example, whether and how
to split a chroma block depend on whether and how to split its
corresponding luma block. [0358] 12. The above methods including
proposed methods and BIO, DMVR or other decoder side motion
refinement technologies, or sub-block based methods (e.g., affine,
ATMVP etc.) may be applied in a sub-block level. [0359] a) In one
example, the iterative motion vector refinement for BIO and DMVR in
bullet 1 and bullet 2 may be invoked for each sub-block. [0360] b)
In one example, when a block with either width or height or both
width and height are both larger than (or equal to) a threshold L,
the block may be split into multiple sub-blocks. Each sub-block is
treated in the same way as a normal coding block with size equal to
the sub-block size. [0361] i. In one example, L is 64, a
64.times.128/128.times.64 block is split into two 64.times.64
sub-blocks, and a 128.times.128 block is split into four
64.times.64 sub-blocks. However, N.times.128/128.times.N block,
wherein N<64, is not split into sub-blocks. [0362] ii. In one
example, L is 64, a 64.times.128/128.times.64 block is split into
two 64.times.64 sub-blocks, and a 128.times.128 block is split into
four 64.times.64 sub-blocks. Meanwhile, N.times.128/128.times.N
block, wherein N<64, is split into two N.times.64/64.times.N
sub-blocks. [0363] iii. In one example, when width (or height) is
larger than L, it is split vertically (or horizontally), and the
width or/and height of the sub-block is no larger than L.
[0364] iv. In one example, L may be different for vertical
direction and horizontal direction. For example, if width of block
is larger than LW, the block may be split vertically; if height of
a block is larger than LH, the block may be split horizontally.
[0365] v. In one example, LW may be width of the VPDU (virtual
pipeline data unit) and LH may be height of the VPDU. [0366] c) In
one example, when size (i.e., width*height) of block is larger than
a threshold L1, it may be split into multiple sub-blocks. Each
sub-block is treated in the same way as a normal coding block with
size equal to the sub-block size. [0367] i. In one example, the
block is split into sub-blocks with same size that is no larger
than L1. [0368] ii. In one example, if width (or height) of the
block is no larger than a threshold L2, it is not split vertically
(or horizontally). [0369] iii. In one example, L1 is size of the
VPDU. [0370] iv. In one example, L1 is 1024, and L2 is 32. For
example, a 16.times.128 block is split into two 16.times.64
sub-blocks. [0371] v. In one example, L2=sqrt(L1). [0372] vi. In
one example, if block size (width and height denoted by W and H,
respectively) is larger than L1, width (denoted by subW) and height
(denoted by subH) of a sub-block are derived as follows (an
illustrative sketch follows Embodiment #5 below): [0373] If
W>=L2 and H>=L2, then subW=L2 and subH=L2; [0374] else if W>=L2 and
H<L2, then subH=H and subW=L1/H; [0375] else (W<L2 and H>=L2),
subW=W and subH=L1/W. [0376] d) In one example, two-level splitting of one
block may be applied wherein different rules may be applied to
decide how to perform the splitting. [0377] i. In one example, a block may
be first split into sub-blocks using the method in bullet 12.b, and
these sub-blocks may be further split using the method in bullet 12.c.
[0378] ii. In one example, a block may be first split into
sub-blocks using the method in bullet 12.c, and these sub-blocks may be
further split using the method in bullet 12.b. [0379] e) The threshold
L may be pre-defined or signaled in SPS/PPS/picture/slice/tile
group/tile level. [0380] f) Alternatively, the thresholds may
depend on certain coded information, such as block size, picture
type, temporal layer index, etc. [0381] g) In one example,
deblocking may be performed at the boundaries of these sub-blocks. [0382]
13. It is proposed that DMVD may be disabled in
multi-hypothesis intra and inter prediction. [0383] a)
Alternatively, DMVD may be enabled in multi-hypothesis intra and
inter prediction. [0384] 14. It is proposed that DMVD may be
disabled in MMVD (merge mode with MVD) or UMVE mode. [0385] a)
Alternatively, DMVD may be enabled in MMVD (merge mode with MVD) or
UMVE mode. [0386] 15. It is proposed that DMVD may be disabled
in triangle prediction. [0387] a) Alternatively, DMVD may be
enabled in triangle prediction. [0388] 16. In one example, whether
to and how to apply motion refinement methods such as DMVR or/and
BIO and/or other decoder side motion refinement technologies
depends on the reference picture. [0389] a) In one example, motion
refinement methods are not applied if the reference picture is the
current coding picture; [0390] b) In one example, multi-time motion
refinement methods claimed in previous bullets are not applied if
the reference picture is the current coding picture; [0391] c)
whether to and how to apply motion refinement methods such as DMVR
or/and BIO and/or other decoder side motion refinement technologies
depends on the positions of sub-blocks relative to the block
covering the sub-block, and/or relative to the coding tree unit
(CTU), and/or relative to the top-left position of the
tile/picture. [0392] 17. It is proposed that in the early
termination stage of BIO or/and DMVR or other coding tools that rely
on difference calculation, the difference (e.g.,
SAD/SATD/SSE/MRSAD/MRSATD/MRSSE etc.) between the two reference
blocks or/and sub-blocks may be calculated only for some
representative positions. [0393] a) In one example, only the
difference of even rows is calculated for the block or/and sub-block.
[0394] b) In one example, only the difference of the four corner
samples of one block/sub-block is calculated for the block or/and
sub-block.
[0395] c) In one example, method in improvements of decoder side
motion vector derivation in video coding may be used to select the
representative positions. [0396] d) In one example, the difference
(e.g., SAD/SATD/SSE/MRSAD/MRSATD/MRSSE etc.) between the two
reference blocks may be calculated only for some representative
sub-blocks. [0397] e) In one example, the differences (e.g.,
SAD/SATD/SSE/MRSAD/MRSATD/MRSSE etc.) calculated for representative
positions or sub-blocks are summed up to get the difference for the
whole block/sub-block. [0398] 18. In one example, the difference
between the two reference blocks is calculated directly (instead of
being calculated as the sum of differences between the reference
sub-blocks) and is used to decide whether BIO or/and DMVR or other
coding tools that rely on difference calculation are enabled or
disabled for the entire block. [0399] a) In one example, methods
described in bullet 17 may be used to calculate the difference between the
two reference blocks. [0400] 19. Embodiment [0401] This section
presents an embodiment for how to split a block into sub-blocks in
sub-block level DMVD. [0402] Embodiment #1 [0403] a) Step 1: if a
block is of size 128.times.128, it is split into 4 64.times.64
sub-blocks. If a block is of size N.times.128 or 128.times.N
(N<128), it is split into 2 N.times.64 or 64.times.N sub-blocks.
Other blocks are not split. [0404] b) Step 2: for a block
that is not of size 128.times.128, or N.times.128 or 128.times.N
(N<128), and for a sub-block generated in step 1, if its size
(i.e., width*height) is larger than 256, it is further split into
sub-blocks of size 256 using the method described in 12.c, with L1=256
and L2=16. [0405] Embodiment #2 [0406] a) Step 1: if a block is of
size 128.times.128, it is split into 4 64.times.64 sub-blocks. If a
block is of size N.times.128 or 128.times.N, it is split into 2
N.times.64 or 64.times.N sub-blocks (N<128). Other blocks are not
split. [0407] b) Step 2: for a block that is not of size
128.times.128, or N.times.128 or 128.times.N (N<128), and for a
sub-block generated in step 1, if its size (i.e., width*height) is
larger than 1024, it is further split into sub-blocks of size 1024
using the method described in 12.c, with L1=1024 and L2=32. [0408]
Embodiment #3 [0409] a) Step 1: if a block is of size
128.times.128, it is split into 4 64.times.64 sub-blocks. If a
block is of size N.times.128 or 128.times.N, it is split into 2
N.times.64 or 64.times.N sub-blocks (N<128). Other blocks are not
split. [0410] Embodiment #4 [0411] a) Step 1: if a
block is of size 256.times.256, it is split into 4 128.times.128
sub-blocks. If a block is of size N.times.256 or 256.times.N, it is
split into 2 N.times.128 or 128.times.N sub-blocks (N<256). Other
blocks are not split. [0412] b) Step 2: for a block that
is not of size 256.times.256, or N.times.256 or 256.times.N
(N<256), and for a sub-block generated in step 1, if its size
(i.e., width*height) is larger than 1024, it is further split into
sub-blocks of size 1024 using the method described in 12.c, with
L1=1024 and L2=32. [0413] Embodiment #5 [0414] a) Step 1: if width
or height of a block is larger than 64, it is split into sub-blocks
using the method described in 12.b, with LW=LH=64. [0415] b) Step 2:
for a block whose width and height are no larger than 64, and for a
sub-block generated in step 1, if its size (i.e., width*height) is
larger than 1024, it is further split into sub-blocks of size 1024
using the method described in 12.c, with L1=1024 and L2=32.
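For illustration only, the following minimal Python sketch (not part
of the application; the function names split_by_dimension,
split_by_size and two_level_split are ours) shows the two-level
splitting of Embodiment #5, i.e., bullet 12.b with LW=LH=64 followed
by bullet 12.c with L1=1024 and L2=32, consistent with the sub-block
derivation in bullet 12.c.vi:

    # Illustrative, non-normative sketch of the two-level splitting in
    # Embodiment #5: step 1 caps each dimension (bullet 12.b), step 2
    # caps the area at L1 with minimum dimension L2 (bullet 12.c).

    def split_by_dimension(w, h, lw=64, lh=64):
        """Bullet 12.b: cap sub-block width at LW and height at LH."""
        return min(w, lw), min(h, lh)

    def split_by_size(w, h, l1=1024, l2=32):
        """Bullet 12.c: if w*h > L1, derive sub-block size (subW, subH)
        of area L1; a dimension already below L2 is kept unsplit."""
        if w * h <= l1:
            return w, h
        if w >= l2 and h >= l2:
            return l2, l2              # e.g., 64x64 -> 32x32 sub-blocks
        if w >= l2:                    # h < L2: keep height, subW = L1/H
            return l1 // h, h
        return w, l1 // w              # w < L2: keep width, subH = L1/W

    def two_level_split(w, h):
        """Embodiment #5: step 1 (LW=LH=64), then step 2 (L1=1024, L2=32)."""
        w1, h1 = split_by_dimension(w, h)
        return split_by_size(w1, h1)

    assert two_level_split(128, 128) == (32, 32)  # 64x64, then 32x32
    assert two_level_split(16, 128) == (16, 64)   # two 16x64 sub-blocks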
[0416] FIG. 29 is a block diagram illustrating an example of the
architecture for a computer system or other control device 2600
that can be utilized to implement various portions of the presently
disclosed technology. In FIG. 29, the computer system 2600 includes
one or more processors 2605 and memory 2610 connected via an
interconnect 2625. The interconnect 2625 may represent any one or
more separate physical buses, point to point connections, or both,
connected by appropriate bridges, adapters, or controllers. The
interconnect 2625, therefore, may include, for example, a system
bus, a Peripheral Component Interconnect (PCI) bus, a
HyperTransport or industry standard architecture (ISA) bus, a small
computer system interface (SCSI) bus, a universal serial bus (USB),
IIC (I2C) bus, or an Institute of Electrical and Electronics
Engineers (IEEE) standard 1394 bus, sometimes referred to as
"Firewire."
[0417] The processor(s) 2605 may include central processing units
(CPUs) to control the overall operation of, for example, the host
computer. In certain embodiments, the processor(s) 2605 accomplish
this by executing software or firmware stored in memory 2610. The
processor(s) 2605 may be, or may include, one or more programmable
general-purpose or special-purpose microprocessors, digital signal
processors (DSPs), programmable controllers, application specific
integrated circuits (ASICs), programmable logic devices (PLDs), or
the like, or a combination of such devices.
[0418] The memory 2610 can be or include the main memory of the
computer system. The memory 2610 represents any suitable form of
random access memory (RAM), read-only memory (ROM), flash memory,
or the like, or a combination of such devices. In use, the memory
2610 may contain, among other things, a set of machine instructions
which, when executed by processor 2605, causes the processor 2605
to perform operations to implement embodiments of the presently
disclosed technology.
[0419] Also connected to the processor(s) 2605 through the
interconnect 2625 is an (optional) network adapter 2615. The network
adapter 2615 provides the computer system 2600 with the ability to
communicate with remote devices, such as the storage clients,
and/or other storage servers, and may be, for example, an Ethernet
adapter or Fiber Channel adapter.
[0420] FIG. 30 shows a block diagram of an example embodiment of a
device 2700 that can be utilized to implement various portions of
the presently disclosed technology. The mobile device 2700 can be a
laptop, a smartphone, a tablet, a camcorder, or other types of
devices that are capable of processing videos. The mobile device
2700 includes a processor or controller 2701 to process data, and
memory 2702 in communication with the processor 2701 to store
and/or buffer data. For example, the processor 2701 can include a
central processing unit (CPU) or a microcontroller unit (MCU). In
some implementations, the processor 2701 can include a
field-programmable gate-array (FPGA). In some implementations, the
mobile device 2700 includes or is in communication with a graphics
processing unit (GPU), video processing unit (VPU) and/or wireless
communications unit for various visual and/or communications data
processing functions of the smartphone device. For example, the
memory 2702 can include and store processor-executable code, which
when executed by the processor 2701, configures the mobile device
2700 to perform various operations, e.g., such as receiving
information, commands, and/or data, processing information and
data, and transmitting or providing processed information/data to
another device, such as an actuator or external display. To support
various functions of the mobile device 2700, the memory 2702 can
store information and data, such as instructions, software, values,
images, and other data processed or referenced by the processor
2701. For example, various types of Random Access Memory (RAM)
devices, Read Only Memory (ROM) devices, Flash Memory devices, and
other suitable storage media can be used to implement storage
functions of the memory 2702. In some implementations, the mobile
device 2700 includes an input/output (I/O) unit 2703 to interface
the processor 2701 and/or memory 2702 to other modules, units or
devices. For example, the I/O unit 2703 can interface the processor
2701 and memory 2702 with various types of wireless
interfaces compatible with typical data communication standards,
e.g., such as between the one or more computers in the cloud and
the user device. In some implementations, the mobile device 2700
can interface with other devices using a wired connection via the
I/O unit 2703. The mobile device 2700 can also interface with other
external interfaces, such as data storage, and/or visual or audio
display devices 2704, to retrieve and transfer data and information
that can be processed by the processor, stored in the memory, or
exhibited on an output unit of a display device 2704 or an external
device. For example, the display device 2704 can display a video
frame modified based on the MVPs in accordance with the disclosed
technology.
[0421] FIG. 31 is a flowchart for a method 3100 of video
processing. The method 3100 includes generating (3102), using a
multi-step refinement process, multiple refinement values of motion
vector information based on decoded motion information from a
bitstream representation of a current video block, and
reconstructing (3104) the current video block or decoding other
video blocks based on multiple refinement values.
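As a rough, non-normative illustration of method 3100, the sketch
below (in Python; the cost function and the one-pixel neighborhood
search are stand-ins of our choosing, not the application's) iterates
a refinement step so that the (i-1)-th refined motion vector seeds
the i-th step, and stops after a fixed number of steps or when the
per-step change falls below a measure:

    # Illustrative multi-step MV refinement: each step searches the
    # 8-neighborhood of the previous refinement value; iteration stops
    # after max_steps or when the update is smaller than `measure`.

    def refine_mv(mv, cost_fn, max_steps=3, measure=1):
        mvs = [mv]                       # one refinement value per step
        for _ in range(max_steps):
            x, y = mvs[-1]
            best = min(((x + dx, y + dy)
                        for dx in (-1, 0, 1) for dy in (-1, 0, 1)),
                       key=cost_fn)
            delta = abs(best[0] - x) + abs(best[1] - y)
            mvs.append(best)
            if delta < measure:          # early termination
                break
        return mvs                       # multiple refinement values

    # Toy cost: distance to a hidden optimum at (3, -2).
    cost = lambda mv: abs(mv[0] - 3) + abs(mv[1] + 2)
    print(refine_mv((0, 0), cost))       # steps toward (3, -2)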
[0422] Another video processing method includes performing, for
conversion between a current block and a bitstream representation
of the current block, a multi-step refinement process for a first
sub-block of the current block and a temporal gradient modification
process for a second sub-block of the current block, wherein the
multi-step refinement process generates multiple refinement values
of motion vector information signaled in a bitstream representation
of the current video block, and performing the conversion between the
current block and the bitstream representation using a selected one
of the multiple refinement values.
[0423] Another video processing method includes determining, using
a multi-step decoder-side motion vector refinement process for a
current video block, a final motion vector, and performing
conversion between the current block and the bitstream
representation using the final motion vector.
[0424] Another video processing method includes applying, during
conversion between a current video block and a bitstream
representation of the current video block, multiple different
motion vector refinement processes to different sub-blocks of the
current video block and performing conversion between the current
block and the bitstream representation using a final motion vector
for the current video block generated from the multiple different
motion vector refinement processes.
[0425] In the disclosed embodiments, the bitstream representation
of a current block of video may include bits of a bitstream
(compressed representation of a video) that may be non-contiguous
and may depend on header information, as is known in the art of
video compression. Furthermore, a current block may include samples
representative of one or more of luma and chroma components, or
rotational variations thereof (e.g., YCrCb or YUV, and so on).
[0426] The listing of clauses below describes some embodiments and
techniques as follows.
[0427] 1. A method of video processing, comprising: generating,
using a multi-step refinement process, multiple refinement values
of motion vector information based on decoded motion information
from a bitstream representation of a current video block, and
reconstructing the current video block or decoding other video
blocks based on multiple refinement values. For example, the
refinement operation may include an averaging operation. In yet
another example, refined values of one step of refinement process
are used to reconstruct the current video block. In yet another
example, refined values of one step of refinement process are used
for decoding other video blocks.
[0428] 2. The method of clause 1, wherein the conversion generates
the current block from the bitstream representation.
[0429] 3. The method of clause 1, wherein the conversion generates
the bitstream representation from the current block.
[0430] 4. The method of any of clauses 1 to 3, wherein the
multi-step refinement process includes using refinement values of
(i-1)th step for generating refinement values of ith step, wherein
i=1 to N, where N is a total number of refinement steps performed
during the multi-step refinement process and wherein N is greater
than 1.
[0431] 5. The method of any of clauses 1 to 4, wherein the
performing conversion includes generating, in a step of the
multi-step refinement process, a motion compensated reference block
for the current block using refinement values of the motion vector
information for the step.
[0432] 6. The method of clause 5, wherein the generating the motion
compensated reference block uses different filters for some steps
of the multi-step refinement process.
[0433] 7. The method of any of clauses 1 to 6, wherein the
performing the conversion includes generating, at each step, a
reference block using a refined motion vector generated for that
step.
[0434] 8. The method of clause 7, wherein the reference block is
generated by first rounding the refined motion vector to an integer
value.
[0435] 9. The method of clause 1, wherein a step of the multi-step
refinement process includes splitting the current block into
multiple sub-blocks and performing an additional multi-step
refinement process for at least some of the multiple
sub-blocks.
[0436] 10. The method of clause 9, wherein the additional
multi-step refinement process is performed for all of the
multiple-sub-blocks.
[0437] 11. The method of clause 1, wherein the splitting the
current block is performed after implementing a number of steps of
the multi-step refinement process based on a characteristic of the
current block.
[0438] 12. The method of any of clauses 9 to 11, wherein the
splitting the current block includes splitting the current block in
a step-dependent manner such that, in at least two steps,
sub-blocks of different sizes are used for the additional
multi-step refinement process.
[0439] 13. The method of any of clauses 1 to 12, wherein the
selected one of the multiple refinement values is computed by a
weighted average of the multiple refinement values.
[0440] 14. The method of clause 1, wherein a number of steps
performed prior to terminating the multi-step refinement process is
decided based on changes in refinement values in successive steps
exceeding a measure.
[0441] 15. The method of clause 14, wherein the measure includes an
absolute difference between a vertical or a horizontal component of
a refinement value at a step and the motion information signaled in
the bitstream.
[0442] 16. The method of clause 14, wherein the number of steps
performed prior to terminating the multi-step refinement process is
a function of a characteristic of the current block.
[0443] 17. The method of clause 16, wherein the characteristic of
the current block includes a size of the current block or a coding
mode of the current block or a value of motion vector difference of
the current block or a quantization parameter.
[0444] 18. The method of clause 14, wherein the number of steps
performed prior to terminating the multi-step refinement process is
signaled in the bitstream at a video level, a sequence level, a
picture level, a slice level, a tile level, coding tree unit level
or a coding unit level.
[0445] 19. The method of any of clauses 1 to 18, wherein the
selected one of the multiple refinement values is a refinement
value calculated at a final step of the multi-step refinement
process.
[0446] 20. The method of any of clauses 1 to 19, wherein, during
each step of the multi-step refinement process, an intermediate
motion vector is generated from a previous step and used for
calculating a refined estimate for a next step after the previous
step.
[0447] 21. A method of video processing, comprising: [0448]
performing, for conversion between a current block and a bitstream
representation of the current block, a multi-step refinement
process for a sub-block of the current block and a temporal
gradient modification process between two prediction blocks of the
sub-block, wherein the multi-step refinement process generates
multiple refinement values of motion vector information based on
decoded motion information from a bitstream representation of the
current video block; and [0449] performing the conversion between
the current block and the bitstream representation based on
the refinement values.
[0450] 22. The method of clause 21, wherein the second sub-block
comprises M1.times.N1 pixels and the first sub-block comprises
M2.times.N2 pixels where M1, N1, M2 and N2 are integers.
[0451] 23. The method of clause 22, wherein M1=N1=8, M2=N2=4.
[0452] 24. The method of any of clauses 21 to 23, wherein M1, N1,
M2, N2 are pre-defined or depend on a size of the current block or
a coding mode of the current block or signaled at a video level,
sequence level, picture level, a slice level, a picture header
level or a tile level.
[0453] 25. The method of any of clauses 1 to 24, wherein each step
of the multi-step refinement process uses partial pixels of the
current block or a sub-block of the current block.
[0454] 26. The method of clause 25, wherein the multi-step
refinement process uses pixels from every Nth row or column.
[0455] 27. A method of video processing, comprising: determining,
using a multi-step decoder-side motion vector refinement process a
current video block, a final motion vector; and performing
conversion between the current block and the bitstream
representation using the final motion vector.
[0456] 28. The method of clause 27, wherein the multi-step
decoder-side motion vector refinement process for the current
video block is performed on refinement values at an ith step of
the multi-step refinement process recited in Clause 1, where i is
an integer.
[0457] 29. The method of any of clauses 27 to 28, wherein
interpolation filters used for the multi-step refinement process
are different from interpolation filters used for conversion of
another block.
[0458] 30. The method of any of clauses 1 to 26, wherein the
selected one of the multiple refinement values is used for motion
compensation of a subset of luma and Cr, Cb chroma components.
[0459] 31. The method of clause 30, wherein the subset corresponds
to the luma component.
[0460] 32. The method of clause 30, wherein the subset corresponds
to the chroma components.
[0461] 33. The method of clause 32, wherein the motion compensation of
the chroma components comprises a low bandwidth motion compensation
process in which a subsampled version of the chroma components is
used for motion compensation.
[0462] 34. A method of video processing, comprising: applying,
during conversion between a current video block and a bitstream
representation of the current video block; multiple different
motion vector refinement processes to different sub-blocks of the
current video block; and performing conversion between the current
block and the bitstream representation using a final motion vector
for the current video block generated from the multiple different
motion vector refinement processes.
[0463] 35. The method of clause 34, wherein the conversion
generates the current block from the bitstream representation
[0464] 36. The method of clause 34, wherein the conversion
generates the bitstream representation from the current block.
[0465] 37. The method of any of clauses 34 to 36, wherein the
multiple different motion vector refinement processes include a
bi-directional optical flow process or a multi-step refinement
process.
[0466] 38. The method of any of clauses 34 to 37, wherein the
multiple different motion vector refinement processes are
selectively applied to the sub-blocks based on size or coding mode
of the sub-blocks or positions of the sub-blocks within the current
block or a type of reference picture used for coding the current
block.
[0467] 39. The method of clause 38, further including refraining
from applying the multiple different motion vector refinement
processes in case that the reference picture is a current coding
picture of the current block.
[0468] 40. The method of any of clause 1 to 39, wherein the current
block corresponds to a prediction unit or a coding unit or a block
or a sub-block.
[0469] 41. A method of video processing, comprising: [0470]
performing a conversion between a current video block and a
bitstream representation of the current video block using a rule
that limits a maximum number of sub-blocks into which a coding unit
or a prediction unit is divided in case that the current video block
is coded using
a sub-block based coding tool, wherein the sub-block based coding
tool includes one or more of affine coding, advanced temporal
motion vector predictor, bi-directional optical flow or a
decoder-side motion vector refinement coding tool.
[0471] 42. The method of clause 41, wherein the rule further
specifies a width or a height of the sub-blocks.
[0472] 43. The method of clause 42, wherein the current video block
includes M.times.N pixels and wherein the sub-blocks have size
K.times.L pixels, where K, L, M, N are integers.
[0473] 44. The method of clause 43, wherein the width of each
sub-block is a maximum of THw and M/K, wherein THw is an integer
number.
[0474] 45. The method of any of clauses 43-44, wherein the height of
each sub-block is a maximum of THh and N/L, wherein THh is an integer
number.
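For concreteness, a minimal Python sketch of the dimension rule in
clauses 43-45 (illustrative only; the function name subblock_dims and
the parameter defaults, which follow the uni-/bi-prediction values
suggested for TH.sub.w and TH.sub.h in the description above, are
ours):

    # Sub-block width = max(THw, M/K), height = max(THh, N/L), so the
    # thresholds floor the sub-block size and cap the sub-block count.

    def subblock_dims(m, n, k, l, bi_prediction=False):
        th_w = th_h = 8 if bi_prediction else 4
        return max(th_w, m // k), max(th_h, n // l)

    # A 64x64 block with at most 4x4 sub-blocks per dimension:
    print(subblock_dims(64, 64, 4, 4))                      # (16, 16)
    print(subblock_dims(16, 16, 4, 4, bi_prediction=True))  # (8, 8)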
[0475] 46. A method of video processing, comprising: [0476]
performing a conversion between a current video block and a
bitstream representation of the current video block using a rule
that specifies to use different partitioning for chroma components
of the current video block than a luma component of the current
video block in case that the current video block is coded using a
sub-block based coding tool, wherein the sub-block based coding
tool includes one or more of affine coding, advanced temporal
motion vector predictor, bi-directional optical flow or a
decoder-side motion vector refinement coding tool.
[0477] 47. The method of clause 46, wherein the rule specifies
partitioning for chroma components based on a width and a height of
the current video block or the chroma component of the current
video block.
[0478] 48. The method of any of clauses 41-47, wherein the
conversion comprises generating pixel values of the current video
block from the bitstream representation.
[0479] 49. The method of any of clauses 41-47, wherein the
conversion comprises generating the bitstream representation from
pixel values of the current video block.
[0480] 50. The method of clause 9, wherein splitting the current
block into multiple sub-blocks includes determining a size of the
multiple sub-blocks based on a size of the current video block.
[0481] 51. A method of video processing, comprising determining, in
an early termination stage of a bi-directional optical flow (BIO)
technique or a decoder-side motion vector refinement (DMVR)
technique, differences between reference video blocks associated
with a current video block; and performing further processing of the
current video block based on the differences.
[0482] 52. The method of clause 51, wherein determining the
differences is based on even rows of the reference video
blocks.
[0483] 53. The method of clause 51, wherein determining the
differences is based on corner samples of the reference video
blocks.
[0484] 54. The method of clause 51, wherein determining the
differences is based on sub-blocks of the reference video
blocks.
[0485] 55. The method of clause 51, wherein the reference video
blocks include a first reference video block and a second reference
video block, the differences based on a summation of the
differences between the first reference video block and the second
reference video block.
[0486] 56. The method of any of clauses 51-55, wherein the differences
include one or more of: sum of absolute differences (SAD), sum of
absolute transformed differences (SATD), sum of squares error
(SSE), mean removed sum of absolute differences (MRSAD), mean
removed sum of absolute transformed differences (MRSATD), or mean
removed sum of squares error (MRSSE).
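As an aid to the reader, the measures named in clause 56 can be
written compactly; the following numpy sketch is ours, not the
application's, and omits SATD/MRSATD, which additionally transform
the residual (e.g., with a Hadamard transform) before summing:

    import numpy as np

    # Difference measures of clause 56; "mean removed" variants first
    # subtract each block's own mean, discounting a constant offset.

    def sad(a, b):    return np.abs(a - b).sum()
    def sse(a, b):    return ((a - b) ** 2).sum()
    def mrsad(a, b):  return sad(a - a.mean(), b - b.mean())
    def mrsse(a, b):  return sse(a - a.mean(), b - b.mean())

    a = np.array([[1.0, 2.0], [3.0, 4.0]])
    b = a + 10.0                    # a pure mean shift between blocks
    print(sad(a, b), mrsad(a, b))   # 40.0 vs 0.0 (offset removed)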
[0487] 57. A video processing apparatus comprising a processor
configured to implement a method recited in any one or more of
clauses 1 to 56.
[0488] 58. A computer program product stored on a non-transitory
computer readable medium, the computer program product including
program code for carrying out the method in any one of clauses 1 to
56.
[0489] FIG. 32 is a flowchart for a method 3200 of video
processing. The method 3200 includes calculating (3202), during a
conversion between a current block of video and a bitstream
representation of the current block, differences between two
reference blocks associated with the current block or differences
between two reference sub-blocks associated with a sub-block within
the current block based on representative positions of the
reference blocks or representative positions of the reference
sub-blocks, and performing (3204) the conversion based on the
differences.
[0490] In some examples, calculating the differences comprises
calculating differences of interlaced positions of the two
reference blocks and/or two reference sub-blocks.
[0491] In some examples, calculating the differences comprises
calculating differences of even rows of the two reference blocks
and/or two reference sub-blocks.
[0492] In some examples, calculating the differences comprises
calculating differences of the four corner samples of the two
reference blocks and/or two reference sub-blocks.
[0493] In some examples, calculating the differences comprises
calculating differences between the two reference blocks based on
representative sub-blocks within the reference blocks.
[0494] In some examples, the representative positions are selected
by using a predetermined strategy.
[0495] In some examples, performing the conversion based on the
differences comprises: summing up the differences calculated for
the representative positions of the reference sub-blocks to obtain
the difference for the sub-block.
[0496] In some examples, performing the conversion based on the
differences comprises: summing up the differences calculated for
the representative positions of the reference blocks to obtain the
difference for the current block.
[0497] In some examples, the differences are calculated in an early
termination stage of a motion vector refinement processing or a
prediction refinement processing relying on difference
calculation.
[0498] In some examples, performing the conversion based on the
differences comprises: summing up the differences calculated for
the representative positions to obtain the difference for the
current block; determining whether a motion vector refinement
processing or a prediction refinement processing relying on
difference calculation is enabled or disabled for the current block
based on the difference of the current block.
[0499] In some examples, the prediction refinement processing
includes a bi-directional optical flow (BIO) technique, and/or the
motion vector refinement processing includes a decoder-side motion
vector refinement (DMVR) technique or a frame-rate up conversion
(FRUC) technique.
[0500] In some examples, the differences include one or more of: sum
of absolute differences (SAD), sum of absolute transformed
differences (SATD), sum of squares error (SSE), mean removed sum of
absolute differences (MRSAD), mean removed sum of absolute
transformed differences (MRSATD), or mean removed sum of squares
error (MRSSE).
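To make the representative-position calculation of method 3200
concrete, here is a short, non-normative numpy sketch (the function
names, the 8x8 test blocks and the threshold value are arbitrary
stand-ins of ours):

    import numpy as np

    # Difference between two reference blocks at representative
    # positions only (even rows, or the four corner samples); the
    # per-position differences are summed and compared against a
    # threshold in the early termination test.

    def diff_even_rows(ref0, ref1):
        return np.abs(ref0[::2, :] - ref1[::2, :]).sum()

    def diff_corners(ref0, ref1):
        rows, cols = [0, 0, -1, -1], [0, -1, 0, -1]
        return np.abs(ref0[rows, cols] - ref1[rows, cols]).sum()

    def refinement_enabled(ref0, ref1, threshold=8.0, even_rows=True):
        d = (diff_even_rows if even_rows else diff_corners)(ref0, ref1)
        return d > threshold  # skip refinement when references match

    rng = np.random.default_rng(0)
    r0 = rng.random((8, 8))
    print(refinement_enabled(r0, r0))        # False: identical blocks
    print(refinement_enabled(r0, 1.0 - r0))  # True for this threshold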
[0501] FIG. 33 is a flowchart for a method 3300 of video
processing. The method 3300 includes: making (3302) a decision,
based on a determination that a current block of a video is coded
using a specific coding mode, regarding a selective enablement of a
decoder side motion vector derivation (DMVD) tool for the current
block, wherein the DMVD tool derives a refinement of motion
information signaled in a bitstream representation of the video;
and performing (3304), based on the decision, a conversion between
the current block and the bitstream representation.
[0502] In some examples, the DMVD tool is disabled upon a
determination that a prediction signal of the current block is
generated at least based on an intra prediction signal and an inter
prediction signal.
[0503] In some examples, the DMVD tool is enabled upon a
determination that a prediction signal of the current block is
generated at least based on an intra prediction signal and an inter
prediction signal.
[0504] In some examples, the current block is coded in a Combined
Inter and Intra Prediction (CIIP) mode.
[0505] In some examples, the DMVD tool is disabled upon a
determination that the current block is coded with a Merge mode and
motion vector differences.
[0506] In some examples, the DMVD tool is enabled upon a
determination that the current block is coded with a Merge mode and
motion vector differences.
[0507] In some examples, the current block is coded in a Merge mode
with Motion Vector Difference (MMVD) mode.
[0508] In some examples, the DMVD tool is disabled upon a
determination that the current block is coded with multiple
sub-regions and at least one of them is non-rectangular.
[0509] In some examples, the DMVD tool is enabled upon a
determination that the current block is coded with multiple
sub-regions and at least one of them is non-rectangular.
[0510] In some examples, the current block is coded with the
triangular prediction mode.
[0511] In some examples, the DMVD tool comprises a decoder side
motion vector refinement (DMVR) tool.
[0512] In some examples, the DMVD tool comprises a bi-directional
optical flow (BDOF) tool.
[0513] In some examples, the DMVD tool comprises a frame-rate up
conversion (FRUC) tool or other decoder-side motion vector
refinement method or sample refinement method.
[0514] In some examples, the conversion generates the current block
from the bitstream representation.
[0515] In some examples, the conversion generates the bitstream
representation from the current block.
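For illustration only, the mode-based decision of method 3300 can be
sketched as below (Python; the mode names and the choice to hard-code
the "disabled" variants of paragraphs [0502]-[0510] are ours, since
the text presents enabling and disabling as alternatives):

    # Non-normative sketch: the DMVD tool (e.g., DMVR or BDOF) is
    # switched off when the current block uses CIIP, MMVD, or a
    # non-rectangular (e.g., triangular) partition.

    DMVD_DISABLING_MODES = {"CIIP", "MMVD", "TRIANGLE"}

    def dmvd_enabled(coding_mode: str) -> bool:
        return coding_mode not in DMVD_DISABLING_MODES

    for mode in ("REGULAR_MERGE", "CIIP", "MMVD", "TRIANGLE"):
        print(mode, dmvd_enabled(mode))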
[0516] The disclosed and other embodiments, modules and the
functional operations described in this document can be implemented
in digital electronic circuitry, or in computer software, firmware,
or hardware, including the structures disclosed in this document
and their structural equivalents, or in combinations of one or more
of them. The disclosed and other embodiments can be implemented as
one or more computer program products, i.e., one or more modules of
computer program instructions encoded on a computer readable medium
for execution by, or to control the operation of, data processing
apparatus. The computer readable medium can be a machine-readable
storage device, a machine-readable storage substrate, a memory
device, a composition of matter effecting a machine-readable
propagated signal, or a combination of one or more of them. The term
"data processing apparatus" encompasses all apparatus, devices, and
machines for processing data, including by way of example a
programmable processor, a computer, or multiple processors or
computers. The apparatus can include, in addition to hardware, code
that creates an execution environment for the computer program in
question, e.g., code that constitutes processor firmware, a
protocol stack, a database management system, an operating system,
or a combination of one or more of them. A propagated signal is an
artificially generated signal, e.g., a machine-generated
electrical, optical, or electromagnetic signal, that is generated
to encode information for transmission to suitable receiver
apparatus.
[0517] A computer program (also known as a program, software,
software application, script, or code) can be written in any form
of programming language, including compiled or interpreted
languages, and it can be deployed in any form, including as a
stand-alone program or as a module, component, subroutine, or other
unit suitable for use in a computing environment. A computer
program does not necessarily correspond to a file in a file system.
A program can be stored in a portion of a file that holds other
programs or data (e.g., one or more scripts stored in a markup
language document), in a single file dedicated to the program in
question, or in multiple coordinated files (e.g., files that store
one or more modules, sub programs, or portions of code). A computer
program can be deployed to be executed on one computer or on
multiple computers that are located at one site or distributed
across multiple sites and interconnected by a communication
network.
[0518] The processes and logic flows described in this document can
be performed by one or more programmable processors executing one
or more computer programs to perform functions by operating on
input data and generating output. The processes and logic flows can
also be performed by, and apparatus can also be implemented as,
special purpose logic circuitry, e.g., an FPGA (field programmable
gate array) or an ASIC (application specific integrated
circuit).
[0519] Processors suitable for the execution of a computer program
include, by way of example, both general and special purpose
microprocessors, and any one or more processors of any kind of
digital computer. Generally, a processor will receive instructions
and data from a read only memory or a random-access memory or both.
The essential elements of a computer are a processor for performing
instructions and one or more memory devices for storing
instructions and data. Generally, a computer will also include, or
be operatively coupled to receive data from or transfer data to, or
both, one or more mass storage devices for storing data, e.g.,
magnetic, magneto optical disks, or optical disks. However, a
computer need not have such devices. Computer readable media
suitable for storing computer program instructions and data include
all forms of non-volatile memory, media and memory devices,
including by way of example semiconductor memory devices, e.g.,
EPROM, EEPROM, and flash memory devices; magnetic disks, e.g.,
internal hard disks or removable disks; magneto optical disks; and
CD ROM and DVD-ROM disks. The processor and the memory can be
supplemented by, or incorporated in, special purpose logic
circuitry.
[0520] While this patent document contains many specifics, these
should not be construed as limitations on the scope of any
invention or of what may be claimed, but rather as descriptions of
features that may be specific to particular embodiments of
particular inventions. Certain features that are described in this
patent document in the context of separate embodiments can also be
implemented in combination in a single embodiment. Conversely,
various features that are described in the context of a single
embodiment can also be implemented in multiple embodiments
separately or in any suitable subcombination. Moreover, although
features may be described above as acting in certain combinations
and even initially claimed as such, one or more features from a
claimed combination can in some cases be excised from the
combination, and the claimed combination may be directed to a
subcombination or variation of a subcombination.
[0521] Similarly, while operations are depicted in the drawings in
a particular order, this should not be understood as requiring that
such operations be performed in the particular order shown or in
sequential order, or that all illustrated operations be performed,
to achieve desirable results. Moreover, the separation of various
system components in the embodiments described in this patent
document should not be understood as requiring such separation in
all embodiments.
[0522] Only a few implementations and examples are described and
other implementations, enhancements and variations can be made
based on what is described and illustrated in this patent
document.
* * * * *