U.S. patent application number 17/362048 was filed with the patent office on June 29, 2021, and published on 2022-04-14 as publication number 20220116635 for a method and apparatus for video coding.
This patent application is currently assigned to TENCENT AMERICA LLC. The applicant listed for this patent is TENCENT AMERICA LLC. The invention is credited to Yixin DU, Shan LIU, Sehoon YEA, and Xin ZHAO.
Application Number: 20220116635 / 17/362048
Publication Date: 2022-04-14
United States Patent Application 20220116635
Kind Code: A1
YEA, Sehoon; et al.
April 14, 2022
METHOD AND APPARATUS FOR VIDEO CODING
Abstract
Aspects of the disclosure include methods, apparatuses, and
non-transitory computer-readable storage mediums for video
encoding/decoding. An apparatus includes processing circuitry that
determines directionality information of a restoration filter unit
included in a video frame based on at least one of a constrained
directional enhancement filter (CDEF) process or an intra
prediction mode. The processing circuitry determines one of a
plurality of sets of filter parameters of a restoration filter
process based on the directionality information of the restoration
filter unit. The processing circuitry performs the restoration
filter process on the restoration filter unit based on the one of
the plurality of sets of filter parameters. The processing
circuitry reconstructs the video frame based on the filtered
restoration filter unit.
Inventors: YEA, Sehoon (Palo Alto, CA); DU, Yixin (Los Altos, CA); ZHAO, Xin (Santa Clara, CA); LIU, Shan (San Jose, CA)
Applicant: TENCENT AMERICA LLC, Palo Alto, CA, US
Assignee: TENCENT AMERICA LLC, Palo Alto, CA
Appl. No.: 17/362048
Filed: June 29, 2021
Related U.S. Patent Documents
Application Number: 63091707; Filing Date: Oct 14, 2020
International Class: H04N 19/44 (20060101); H04N 19/117 (20060101); H04N 19/82 (20060101); H04N 19/105 (20060101); H04N 19/159 (20060101); H04N 19/176 (20060101)
Claims
1. A method of video decoding in a decoder, comprising: determining
directionality information of a restoration filter unit included in
a video frame based on at least one of a constrained directional
enhancement filter (CDEF) process or an intra prediction mode;
determining one of a plurality of sets of filter parameters of a
restoration filter process based on the directionality information
of the restoration filter unit; performing the restoration filter
process on the restoration filter unit based on the one of the
plurality of sets of filter parameters; and reconstructing the
video frame based on the filtered restoration filter unit.
2. The method of claim 1, wherein the restoration filter unit
includes one or more directionality information units and the at
least one of the CDEF process or the intra prediction mode is
performed on one of the one or more directionality information
units.
3. The method of claim 1, wherein each of the plurality of sets of
filter parameters of the restoration filter is associated with at
least one directionality of the CDEF process.
4. The method of claim 1, wherein the determining the one of the
plurality of sets of filter parameters includes determining the one
of the plurality of sets of filter parameters of the restoration
filter process based on the directionality information and block
variance information of the restoration filter unit.
5. The method of claim 1, wherein the determining the one of the
plurality of sets of filter parameters includes determining the one
of the plurality of sets of filter parameters of the restoration
filter process based on the directionality information of the
restoration filter unit and filter strengths of the CDEF
process.
6. The method of claim 1, wherein the determining the
directionality information of the restoration filter unit includes
determining the directionality information of the restoration
filter unit based on at least one of a majority vote or a
consistency check of directionalities in the restoration filter
unit.
7. The method of claim 1, wherein in response to the restoration
filter unit not being intra coded and a neighboring block of the
restoration filter unit being intra coded, the determining the
directionality information of the restoration filter unit includes
determining the directionality information of the restoration
filter unit based on the intra prediction mode which is performed
on the neighboring block of the restoration filter unit.
8. The method of claim 1, wherein the performing includes
performing the restoration filter process on the restoration filter
unit based on the directionality information determined from the
CDEF process matching the directionality information determined
from the intra prediction mode.
9. The method of claim 1, wherein the restoration filter process is
one of a Wiener filter process and a self-guided projection
(SGRPRJ) filter process.
10. The method of claim 1, wherein the determining the one of the
plurality of sets of filter parameters includes determining the one
of the plurality of sets of filter parameters of the restoration
filter process based on one of a default set of filter parameters,
a signaled index indicating a set of filter parameters, and a
signaled set of filter parameters.
11. An apparatus, comprising processing circuitry configured to:
determine directionality information of a restoration filter unit
included in a video frame based on at least one of a constrained
directional enhancement filter (CDEF) process or an intra
prediction mode; determine one of a plurality of sets of filter
parameters of a restoration filter process based on the
directionality information of the restoration filter unit; perform
the restoration filter process on the restoration filter unit based
on the one of the plurality of sets of filter parameters; and
reconstruct the video frame based on the filtered restoration
filter unit.
12. The apparatus of claim 11, wherein the restoration filter unit
includes one or more directionality information units and the at
least one of the CDEF process or the intra prediction mode is
performed on one of the one or more directionality information
units.
13. The apparatus of claim 11, wherein each of the plurality of
sets of filter parameters of the restoration filter is associated
with at least one directionality of the CDEF process.
14. The apparatus of claim 11, wherein the processing circuitry is
further configured to: determine the one of the plurality of sets
of filter parameters of the restoration filter process based on the
directionality information and block variance information of the
restoration filter unit.
15. The apparatus of claim 11, wherein the processing circuitry is
further configured to: determine the one of the plurality of sets
of filter parameters of the restoration filter process based on the
directionality information of the restoration filter unit and
filter strengths of the CDEF process.
16. The apparatus of claim 11, wherein the processing circuitry is
further configured to: determine the directionality information of
the restoration filter unit based on at least one of a majority
vote or a consistency check of directionalities in the restoration
filter unit.
17. The apparatus of claim 11, wherein the processing circuitry is
further configured to: in response to the restoration filter unit
not being intra coded and a neighboring block of the restoration
filter unit being intra coded, determine the directionality
information of the restoration filter unit based on the intra
prediction mode which is performed on the neighboring block of the
restoration filter unit.
18. The apparatus of claim 11, wherein the processing circuitry is
further configured to: perform the restoration filter process on
the restoration filter unit based on the directionality information
determined from the CDEF process matching the directionality
information determined from the intra prediction mode.
19. The apparatus of claim 11, wherein the restoration filter
process is one of a Wiener filter process and a self-guided
projection (SGRPRJ) filter process.
20. A non-transitory computer-readable storage medium storing
instructions which when executed by at least one processor cause
the at least one processor to perform: determining directionality
information of a restoration filter unit included in a video frame
based on at least one of a constrained directional enhancement
filter (CDEF) process or an intra prediction mode; determining one
of a plurality of sets of filter parameters of a restoration filter
process based on the directionality information of the restoration
filter unit; performing the restoration filter process on the
restoration filter unit based on the one of the plurality of sets
of filter parameters; and reconstructing the video frame based on
the filtered restoration filter unit.
Description
INCORPORATION BY REFERENCE
[0001] The present application claims the benefit of priority to
U.S. Provisional Application No. 63/091,707, "FEATURE INFORMATION
REUSE FOR ENHANCED RESTORATION FILTERING," filed on Oct. 14, 2020,
which is incorporated by reference herein in its entirety.
TECHNICAL FIELD
[0002] The present disclosure describes embodiments generally
related to video coding.
BACKGROUND
[0003] The background description provided herein is for the
purpose of generally presenting the context of the disclosure. Work
of the presently named inventors, to the extent the work is
described in this background section, as well as aspects of the
description that may not otherwise qualify as prior art at the time
of filing, are neither expressly nor impliedly admitted as prior
art against the present disclosure.
[0004] Video coding and decoding can be performed using
inter-picture prediction with motion compensation. Uncompressed
digital video can include a series of pictures, each picture having
a spatial dimension of, for example, 1920×1080 luminance
samples and associated chrominance samples. The series of pictures
can have a fixed or variable picture rate (informally also known as
frame rate) of, for example, 60 pictures per second or 60 Hz.
Uncompressed video has significant bitrate requirements. For
example, 1080p60 4:2:0 video at 8 bits per sample (1920×1080
luminance sample resolution at 60 Hz frame rate) requires close to
1.5 Gbit/s bandwidth. An hour of such video requires more than 600
GBytes of storage space.
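As a rough check on those figures, the arithmetic can be sketched in a few lines (assuming 4:2:0 subsampling carries 1.5 samples per pixel on average: one luma sample plus half a chroma sample pair):

```python
# Back-of-the-envelope bitrate for uncompressed 1080p60 4:2:0 at 8 bits/sample.
width, height, fps = 1920, 1080, 60
samples_per_pixel = 1.5   # 4:2:0: 1 luma + 0.5 chroma samples per pixel
bits_per_sample = 8

bits_per_second = width * height * samples_per_pixel * bits_per_sample * fps
print(f"{bits_per_second / 1e9:.2f} Gbit/s")                  # ~1.49 Gbit/s
print(f"{bits_per_second * 3600 / 8 / 1e9:.0f} GBytes/hour")  # ~672 GBytes
```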
[0005] One purpose of video coding and decoding can be the
reduction of redundancy in the input video signal, through
compression. Compression can help reduce the aforementioned
bandwidth or storage space requirements, in some cases by two
orders of magnitude or more. Both lossless and lossy compression,
as well as a combination thereof can be employed. Lossless
compression refers to techniques where an exact copy of the
original signal can be reconstructed from the compressed original
signal. When using lossy compression, the reconstructed signal may
not be identical to the original signal, but the distortion between
original and reconstructed signals is small enough to make the
reconstructed signal useful for the intended application. In the
case of video, lossy compression is widely employed. The amount of
distortion tolerated depends on the application; for example, users
of certain consumer streaming applications may tolerate higher
distortion than users of television distribution applications. The
compression ratio achievable can reflect that: higher
allowable/tolerable distortion can yield higher compression
ratios.
[0006] A video encoder and decoder can utilize techniques from
several broad categories, including, for example, motion
compensation, transform, quantization, and entropy coding.
[0007] Video codec technologies can include techniques known as
intra coding. In intra coding, sample values are represented
without reference to samples or other data from previously
reconstructed reference pictures. In some video codecs, the picture
is spatially subdivided into blocks of samples. When all blocks of
samples are coded in intra mode, that picture can be an intra
picture. Intra pictures and their derivations such as independent
decoder refresh pictures, can be used to reset the decoder state
and can, therefore, be used as the first picture in a coded video
bitstream and a video session, or as a still image. The samples of
an intra block can be exposed to a transform, and the transform
coefficients can be quantized before entropy coding. Intra
prediction can be a technique that minimizes sample values in the
pre-transform domain. In some cases, the smaller the DC value after
a transform is, and the smaller the AC coefficients are, the fewer
the bits that are required at a given quantization step size to
represent the block after entropy coding.
[0008] Traditional intra coding, such as that known from, for example,
MPEG-2 generation coding technologies, does not use intra
prediction. However, some newer video compression technologies
include techniques that attempt prediction from, for example,
surrounding sample data and/or metadata obtained during the encoding
and/or decoding of spatially neighboring, and preceding in decoding
order, blocks of data. Such techniques are henceforth called "intra
prediction" techniques. Note that in at least some cases, intra
prediction uses only reference data from the current picture under
reconstruction and not from reference pictures.
[0009] There can be many different forms of intra prediction. When
more than one of such techniques can be used in a given video
coding technology, the technique in use can be coded in an intra
prediction mode. In certain cases, modes can have submodes and/or
parameters, and those can be coded individually or included in the
mode codeword. Which codeword to use for a given mode, submode,
and/or parameter combination can have an impact on the coding
efficiency gain through intra prediction, and so can the entropy
coding technology used to translate the codewords into a
bitstream.
[0010] A certain mode of intra prediction was introduced with
H.264, refined in H.265, and further refined in newer coding
technologies such as joint exploration model (JEM), versatile video
coding (VVC), and benchmark set (BMS). A predictor block can be
formed using neighboring sample values belonging to already
available samples. Sample values of neighboring samples are copied
into the predictor block according to a direction. A reference to
the direction in use can be coded in the bitstream or may be
predicted itself.
[0011] Referring to FIG. 1A, depicted in the lower right is a
subset of nine predictor directions known from H.265's 33 possible
predictor directions (corresponding to the 33 angular modes of the
35 intra modes). The point where the arrows converge (101)
represents the sample being predicted. The arrows represent the
direction from which the sample is being predicted. For example,
arrow (102) indicates that sample (101) is predicted from a sample
or samples to the upper right, at a 45 degree angle from the
horizontal. Similarly, arrow (103) indicates that sample (101) is
predicted from a sample or samples to the lower left of sample
(101), at a 22.5 degree angle from the horizontal.
[0012] Still referring to FIG. 1A, on the top left there is
depicted a square block (104) of 4×4 samples (indicated by a
dashed, boldface line). The square block (104) includes 16 samples,
each labelled with an "S", its position in the Y dimension (e.g.,
row index) and its position in the X dimension (e.g., column
index). For example, sample S21 is the second sample in the Y
dimension (from the top) and the first (from the left) sample in
the X dimension. Similarly, sample S44 is the fourth sample in
block (104) in both the Y and X dimensions. As the block is
4×4 samples in size, S44 is at the bottom right. Further
shown are reference samples that follow a similar numbering scheme.
A reference sample is labelled with an R, its Y position (e.g., row
index) and X position (column index) relative to block (104). In
both H.264 and H.265, prediction samples neighbor the block under
reconstruction; therefore no negative values need to be used.
[0013] Intra picture prediction can work by copying reference
sample values from the neighboring samples as indicated by the
signaled prediction direction. For example, assume the coded video
bitstream includes signaling that, for this block, indicates a
prediction direction consistent with arrow (102)--that is, samples
are predicted from a prediction sample or samples to the upper
right, at a 45 degree angle from the horizontal. In that case,
samples S41, S32, S23, and S14 are predicted from the same
reference sample R05. Sample S44 is then predicted from reference
sample R08.
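To make the copy operation concrete, here is a minimal Python sketch (not part of the application) of the 45 degree mode above, using the S/R sample labels of FIG. 1A; top_ref[c] stands for reference sample R0c:

```python
# Minimal sketch of the 45-degree (up-right) copy described above, using the
# 1-based S(y, x) positions of FIG. 1A.  top_ref[c] holds reference sample R0c.
def predict_45_up_right(top_ref):
    pred = [[0] * 4 for _ in range(4)]
    for y in range(1, 5):
        for x in range(1, 5):
            pred[y - 1][x - 1] = top_ref[x + y]
    return pred

# S41, S32, S23, and S14 all copy top_ref[5] (= R05), and S44 copies
# top_ref[8] (= R08), matching the example in the text.
```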
[0014] In certain cases, the values of multiple reference samples
may be combined, for example through interpolation, in order to
calculate a reference sample; especially when the directions are
not evenly divisible by 45 degrees.
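For such fractional directions, a two-tap linear interpolation between neighboring reference samples is one simple possibility (a sketch, not a scheme mandated by the text):

```python
# Sketch: interpolate a reference value when the prediction direction points
# between integer reference positions.  'frac' in [0, 1) is the sub-sample
# offset along the reference row 'ref'.
def interp_ref(ref, idx, frac):
    return (1.0 - frac) * ref[idx] + frac * ref[idx + 1]
```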
[0015] The number of possible directions has increased as video
coding technology has developed. In H.264 (year 2003), nine
different directions could be represented. That increased to 33 in
H.265 (year 2013), and JEM/VVC/BMS, at the time of disclosure, can
support up to 65 directions. Experiments have been conducted to
identify the most likely directions, and certain techniques in the
entropy coding are used to represent those likely directions in a
small number of bits, accepting a certain penalty for less likely
directions. Further, the directions themselves can sometimes be
predicted from neighboring directions used in neighboring, already
decoded, blocks.
[0016] FIG. 1B shows a schematic (105) that depicts 65 intra
prediction directions according to JEM to illustrate the increasing
number of prediction directions over time.
[0017] The mapping of intra prediction direction bits in the coded
video bitstream that represent the direction can be different from
video coding technology to video coding technology; and can range,
for example, from simple direct mappings of prediction direction to
intra prediction mode, to codewords, to complex adaptive schemes
involving most probable modes, and similar techniques. In all
cases, however, there can be certain directions that are
statistically less likely to occur in video content than certain
other directions. As the goal of video compression is the reduction
of redundancy, those less likely directions will, in a well working
video coding technology, be represented by a larger number of bits
than more likely directions.
[0018] Motion compensation can be a lossy compression technique and
can relate to techniques where a block of sample data from a
previously reconstructed picture or part thereof (reference
picture), after being spatially shifted in a direction indicated by
a motion vector (MV henceforth), is used for the prediction of a
newly reconstructed picture or picture part. In some cases, the
reference picture can be the same as the picture currently under
reconstruction. MVs can have two dimensions X and Y, or three
dimensions, the third being an indication of the reference picture
in use (the latter, indirectly, can be a time dimension).
[0019] In some video compression techniques, an MV applicable to a
certain area of sample data can be predicted from other MVs, for
example from those related to another area of sample data spatially
adjacent to the area under reconstruction, and preceding that MV in
decoding order. Doing so can substantially reduce the amount of
data required for coding the MV, thereby removing redundancy and
increasing compression. MV prediction can work effectively, for
example, because when coding an input video signal derived from a
camera (known as natural video) there is a statistical likelihood
that areas larger than the area to which a single MV is applicable
move in a similar direction and, therefore, can in some cases be
predicted using a similar MV derived from MVs of a neighboring
area. That results in the MV found for a given area being similar
or the same as the MV predicted from the surrounding MVs, and that
in turn can be represented, after entropy coding, in a smaller
number of bits than what would be used if coding the MV directly.
In some cases, MV prediction can be an example of lossless
compression of a signal (namely: the MVs) derived from the original
signal (namely: the sample stream). In other cases, MV prediction
itself can be lossy, for example because of rounding errors when
calculating a predictor from several surrounding MVs.
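One common construction of such a predictor (illustrative; the paragraph above does not fix a particular formula) is the component-wise median of neighboring MVs, with only the difference to the predictor entropy coded:

```python
# Sketch: component-wise median MV predictor from spatial neighbors; only
# the MV difference (mvd) then needs to be entropy coded.
def median_mv_predictor(neighbor_mvs):
    xs = sorted(mv[0] for mv in neighbor_mvs)
    ys = sorted(mv[1] for mv in neighbor_mvs)
    mid = len(neighbor_mvs) // 2
    return (xs[mid], ys[mid])

def mv_difference(mv, predictor):
    return (mv[0] - predictor[0], mv[1] - predictor[1])
```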
[0020] Various MV prediction mechanisms are described in H.265/HEVC
(ITU-T Rec. H.265, "High Efficiency Video Coding", December 2016).
Out of the many MV prediction mechanisms that H.265 offers,
described herein is a technique henceforth referred to as "spatial
merge."
[0021] Referring to FIG. 1C, a current block (111) can include
samples that have been found by the encoder during the motion
search process to be predictable from a previous block of the same
size that has been spatially shifted. Instead of coding that MV
directly, the MV can be derived from metadata associated with one
or more reference pictures, for example from the most recent (in
decoding order) reference picture, using the MV associated with
any one of five surrounding samples, denoted A0, A1, and B0, B1,
B2 (112 through 116, respectively). In H.265, the MV prediction can
use predictors from the same reference picture that the neighboring
block is using.
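A sketch of how such a spatial merge candidate list might be assembled from the five positions of FIG. 1C follows; get_mv is a hypothetical accessor returning the neighbor's MV, or None if the neighbor is unavailable:

```python
# Sketch: spatial merge candidate list from the neighbor positions of FIG. 1C.
def spatial_merge_candidates(get_mv, positions=("A0", "A1", "B0", "B1", "B2")):
    candidates = []
    for pos in positions:
        mv = get_mv(pos)
        if mv is not None and mv not in candidates:  # skip unavailable/duplicates
            candidates.append(mv)
    return candidates
```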
SUMMARY
[0022] Aspects of the disclosure provide apparatuses for video
encoding/decoding. An apparatus includes processing circuitry that
determines directionality information of a restoration filter unit
included in a video frame based on at least one of a constrained
directional enhancement filter (CDEF) process or an intra
prediction mode. The processing circuitry determines one of a
plurality of sets of filter parameters of a restoration filter
process based on the directionality information of the restoration
filter unit. The processing circuitry performs the restoration
filter process on the restoration filter unit based on the one of
the plurality of sets of filter parameters. The processing
circuitry reconstructs the video frame based on the filtered
restoration filter unit.
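The overall decision flow can be sketched as follows (a minimal illustration; the function and variable names are hypothetical, not taken from the application):

```python
# Sketch of the pipeline above: derive a directionality from CDEF and/or the
# intra prediction mode, pick one of several parameter sets, then filter.
def intra_mode_to_direction(intra_mode, num_directions=8):
    # Hypothetical mapping of a directional intra mode onto the same
    # direction indices used by CDEF (cf. the mapping concept of FIG. 15).
    return intra_mode % num_directions

def restore_unit(unit, param_sets, apply_filter, cdef_direction=None,
                 intra_mode=None):
    # Reuse the CDEF dominant direction when available; otherwise fall back
    # to the direction implied by the directional intra prediction mode.
    direction = (cdef_direction if cdef_direction is not None
                 else intra_mode_to_direction(intra_mode))
    params = param_sets[direction]     # one parameter set per direction
    return apply_filter(unit, params)  # e.g., a Wiener or SGRPRJ filter
```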
[0023] In an embodiment, the restoration filter unit includes one
or more directionality information units and the at least one of
the CDEF process or the intra prediction mode is performed on one
of the one or more directionality information units.
[0024] In an embodiment, each of the plurality of sets of filter
parameters of the restoration filter is associated with at least
one directionality of the CDEF process.
[0025] In an embodiment, the processing circuitry determines the
one of the plurality of sets of filter parameters of the
restoration filter process based on the directionality information
and block variance information of the restoration filter unit.
[0026] In an embodiment, the processing circuitry determines the
one of the plurality of sets of filter parameters of the
restoration filter process based on the directionality information
of the restoration filter unit and filter strengths of the CDEF
process.
[0027] In an embodiment, the processing circuitry determines the
directionality information of the restoration filter unit based on
at least one of a majority vote or a consistency check of
directionalities in the restoration filter unit.
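A minimal sketch of such a derivation (the agreement threshold is illustrative, not specified by the text):

```python
# Sketch: derive a single directionality for a restoration filter unit via
# majority vote over its directionality information units, with a
# consistency check on the level of agreement.
from collections import Counter

def unit_directionality(directions, min_agreement=0.5):
    winner, count = Counter(directions).most_common(1)[0]
    if count / len(directions) >= min_agreement:  # consistency check
        return winner
    return None  # inconsistent directions; caller may fall back to a default
```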
[0028] In an embodiment, the processing circuitry determines the
directionality information of the restoration filter unit based on
the intra prediction mode which is performed on a neighboring block
of the restoration filter unit based on the restoration filter unit
not being intra coded and the neighboring block being intra
coded.
[0029] In an embodiment, the processing circuitry performs the
restoration filter process on the restoration filter unit based on
the directionality information determined from the CDEF process
matching the directionality information determined from the intra
prediction mode.
[0030] In an embodiment, the restoration filter process is one of a
Wiener filter process and a self-guided projection (SGRPRJ) filter
process.
[0031] In an embodiment, the processing circuitry determines the
one of the plurality of sets of filter parameters of the
restoration filter process based on one of a default set of filter
parameters, a signaled index indicating a set of filter parameters,
or a signaled set of filter parameters.
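The three sources named above could be resolved in a precedence order such as the following (an illustrative sketch; the embodiment does not prescribe this ordering):

```python
# Sketch: resolve the active filter parameter set from a signaled set, a
# signaled index into predefined sets, or a default set.
def resolve_params(default_set, predefined_sets=(), signaled_index=None,
                   signaled_set=None):
    if signaled_set is not None:    # a full set carried in the bitstream
        return signaled_set
    if signaled_index is not None:  # an index selecting a predefined set
        return predefined_sets[signaled_index]
    return default_set              # otherwise use the default set
```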
[0032] Aspects of the disclosure provide methods for video
encoding/decoding. In the method, directionality information of a
restoration filter unit included in a video frame is determined
based on at least one of a CDEF process or an intra prediction
mode. One of a plurality of sets of filter parameters of a
restoration filter process is determined based on the
directionality information of the restoration filter unit. The
restoration filter process is performed on the restoration filter
unit based on the one of the plurality of sets of filter
parameters. The video frame is reconstructed based on the filtered
restoration filter unit.
[0033] Aspects of the disclosure also provide non-transitory
computer-readable mediums storing instructions which when executed
by at least one processor cause the at least one processor to
perform any one or a combination of the methods for video
decoding.
BRIEF DESCRIPTION OF THE DRAWINGS
[0034] Further features, the nature, and various advantages of the
disclosed subject matter will be more apparent from the following
detailed description and the accompanying drawings in which:
[0035] FIG. 1A is a schematic illustration of an exemplary subset
of intra prediction modes;
[0036] FIG. 1B is an illustration of exemplary intra prediction
directions;
[0037] FIG. 1C is a schematic illustration of a current block and
its surrounding spatial merge candidates in one example;
[0038] FIG. 2 is a schematic illustration of a simplified block
diagram of a communication system in accordance with an
embodiment;
[0039] FIG. 3 is a schematic illustration of a simplified block
diagram of a communication system in accordance with an
embodiment;
[0040] FIG. 4 is a schematic illustration of a simplified block
diagram of a decoder in accordance with an embodiment;
[0041] FIG. 5 is a schematic illustration of a simplified block
diagram of an encoder in accordance with an embodiment;
[0042] FIG. 6 shows a block diagram of an encoder in accordance
with another embodiment;
[0043] FIG. 7 shows a block diagram of a decoder in accordance with
another embodiment;
[0044] FIG. 8 shows exemplary nominal angles according to an
embodiment of the disclosure;
[0045] FIG. 9 shows positions of top, left, and top-left samples
for one pixel in a current block according to an embodiment of the
disclosure;
[0046] FIG. 10 shows an exemplary recursive filter intra mode
according to an embodiment of the disclosure;
[0047] FIG. 11 shows some exemplary directions in a constrained
directional enhancement filter (CDEF) process according to some
embodiments of the disclosure;
[0048] FIG. 12 shows some exemplary block partitions according to
some embodiments of the disclosure;
[0049] FIG. 13 shows an example in which directionality unit blocks
are merged into filtering units according to an embodiment of the
disclosure;
[0050] FIG. 14 shows another example where directionality unit
blocks are merged into filtering units according to an embodiment
of the disclosure;
[0051] FIG. 15 shows an exemplary mapping of directionality
information between directional intra prediction modes and dominant
directions derived during the CDEF process according to some
embodiments of the disclosure;
[0052] FIG. 16 shows an exemplary flowchart in accordance with an
embodiment of the disclosure; and
[0053] FIG. 17 is a schematic illustration of a computer system in
accordance with an embodiment.
DETAILED DESCRIPTION OF EMBODIMENTS
[0054] I. Video Decoder and Encoder Systems
[0055] FIG. 2 illustrates a simplified block diagram of a
communication system (200) according to an embodiment of the
present disclosure. The communication system (200) includes a
plurality of terminal devices that can communicate with each other,
via, for example, a network (250). For example, the communication
system (200) includes a first pair of terminal devices (210) and
(220) interconnected via the network (250). In the FIG. 2 example,
the first pair of terminal devices (210) and (220) performs
unidirectional transmission of data. For example, the terminal
device (210) may code video data (e.g., a stream of video pictures
that are captured by the terminal device (210)) for transmission to
the other terminal device (220) via the network (250). The encoded
video data can be transmitted in the form of one or more coded
video bitstreams. The terminal device (220) may receive the coded
video data from the network (250), decode the coded video data to
recover the video pictures and display video pictures according to
the recovered video data. Unidirectional data transmission may be
common in media serving applications and the like.
[0056] In another example, the communication system (200) includes
a second pair of terminal devices (230) and (240) that performs
bidirectional transmission of coded video data that may occur, for
example, during videoconferencing. For bidirectional transmission
of data, in an example, each terminal device of the terminal
devices (230) and (240) may code video data (e.g., a stream of
video pictures that are captured by the terminal device) for
transmission to the other terminal device of the terminal devices
(230) and (240) via the network (250). Each terminal device of the
terminal devices (230) and (240) also may receive the coded video
data transmitted by the other terminal device of the terminal
devices (230) and (240), and may decode the coded video data to
recover the video pictures and may display video pictures at an
accessible display device according to the recovered video
data.
[0057] In the FIG. 2 example, the terminal devices (210), (220),
(230) and (240) may be illustrated as servers, personal computers
and smart phones but the principles of the present disclosure may
be not so limited. Embodiments of the present disclosure find
application with laptop computers, tablet computers, media players
and/or dedicated video conferencing equipment. The network (250)
represents any number of networks that convey coded video data
among the terminal devices (210), (220), (230) and (240), including
for example wireline (wired) and/or wireless communication
networks. The communication network (250) may exchange data in
circuit-switched and/or packet-switched channels. Representative
networks include telecommunications networks, local area networks,
wide area networks and/or the Internet. For the purposes of the
present discussion, the architecture and topology of the network
(250) may be immaterial to the operation of the present disclosure
unless explained herein below.
[0058] FIG. 3 illustrates, as an example for an application for the
disclosed subject matter, the placement of a video encoder and a
video decoder in a streaming environment. The disclosed subject
matter can be equally applicable to other video enabled
applications, including, for example, video conferencing, digital
TV, storing of compressed video on digital media including CD, DVD,
memory stick, and the like.
[0059] A streaming system may include a capture subsystem (313)
that can include a video source (301), for example a digital
camera, creating for example a stream of video pictures (302) that
are uncompressed. In an example, the stream of video pictures (302)
includes samples that are taken by the digital camera. The stream
of video pictures (302), depicted as a bold line to emphasize a
high data volume when compared to encoded video data (304) (or
coded video bitstreams), can be processed by an electronic device
(320) that includes a video encoder (303) coupled to the video
source (301). The video encoder (303) can include hardware,
software, or a combination thereof to enable or implement aspects
of the disclosed subject matter as described in more detail below.
The encoded video data (304) (or encoded video bitstream (304)),
depicted as a thin line to emphasize the lower data volume when
compared to the stream of video pictures (302), can be stored on a
streaming server (305) for future use. One or more streaming client
subsystems, such as client subsystems (306) and (308) in FIG. 3 can
access the streaming server (305) to retrieve copies (307) and
(309) of the encoded video data (304). A client subsystem (306) can
include a video decoder (310), for example, in an electronic device
(330). The video decoder (310) decodes the incoming copy (307) of
the encoded video data and creates an outgoing stream of video
pictures (311) that can be rendered on a display (312) (e.g.,
display screen) or other rendering device (not depicted). In some
streaming systems, the encoded video data (304), (307), and (309)
(e.g., video bitstreams) can be encoded according to certain video
coding/compression standards. Examples of those standards include
ITU-T Recommendation H.265. In an example, a video coding standard
under development is informally known as Versatile Video Coding
(VVC). The disclosed subject matter may be used in the context of
VVC.
[0060] It is noted that the electronic devices (320) and (330) can
include other components (not shown). For example, the electronic
device (320) can include a video decoder (not shown) and the
electronic device (330) can include a video encoder (not shown) as
well.
[0061] FIG. 4 shows a block diagram of a video decoder (410)
according to an embodiment of the present disclosure. The video
decoder (410) can be included in an electronic device (430). The
electronic device (430) can include a receiver (431) (e.g.,
receiving circuitry). The video decoder (410) can be used in the
place of the video decoder (310) in the FIG. 3 example.
[0062] The receiver (431) may receive one or more coded video
sequences to be decoded by the video decoder (410); in the same or
another embodiment, one coded video sequence at a time, where the
decoding of each coded video sequence is independent from other
coded video sequences. The coded video sequence may be received
from a channel (401), which may be a hardware/software link to a
storage device which stores the encoded video data. The receiver
(431) may receive the encoded video data with other data, for
example, coded audio data and/or ancillary data streams, that may
be forwarded to their respective using entities (not depicted). The
receiver (431) may separate the coded video sequence from the other
data. To combat network jitter, a buffer memory (415) may be
coupled in between the receiver (431) and an entropy decoder/parser
(420) ("parser (420)" henceforth). In certain applications, the
buffer memory (415) is part of the video decoder (410). In others,
it can be outside of the video decoder (410) (not depicted). In
still others, there can be a buffer memory (not depicted) outside
of the video decoder (410), for example to combat network jitter,
and in addition another buffer memory (415) inside the video
decoder (410), for example to handle playout timing. When the
receiver (431) is receiving data from a store/forward device of
sufficient bandwidth and controllability, or from an isochronous
network, the buffer memory (415) may not be needed, or can be
small. For use on best effort packet networks such as the Internet,
the buffer memory (415) may be required, can be comparatively large
and can be advantageously of adaptive size, and may at least
partially be implemented in an operating system or similar elements
(not depicted) outside of the video decoder (410).
[0063] The video decoder (410) may include the parser (420) to
reconstruct symbols (421) from the coded video sequence. Categories
of those symbols include information used to manage operation of
the video decoder (410), and potentially information to control a
rendering device such as a render device (412) (e.g., a display
screen) that is not an integral part of the electronic device (430)
but can be coupled to the electronic device (430), as was shown in
FIG. 4. The control information for the rendering device(s) may be
in the form of Supplemental Enhancement Information (SEI messages)
or Video Usability Information (VUI) parameter set fragments (not
depicted). The parser (420) may parse/entropy-decode the coded
video sequence that is received. The coding of the coded video
sequence can be in accordance with a video coding technology or
standard, and can follow various principles, including variable
length coding, Huffman coding, arithmetic coding with or without
context sensitivity, and so forth. The parser (420) may extract
from the coded video sequence, a set of subgroup parameters for at
least one of the subgroups of pixels in the video decoder, based
upon at least one parameter corresponding to the group. Subgroups
can include Groups of Pictures (GOPs), pictures, tiles, slices,
macroblocks, Coding Units (CUs), blocks, Transform Units (TUs),
Prediction Units (PUs) and so forth. The parser (420) may also
extract from the coded video sequence information such as transform
coefficients, quantizer parameter values, MVs, and so forth.
[0064] The parser (420) may perform an entropy decoding/parsing
operation on the video sequence received from the buffer memory
(415), so as to create symbols (421).
[0065] Reconstruction of the symbols (421) can involve multiple
different units depending on the type of the coded video picture or
parts thereof (such as: inter and intra picture, inter and intra
block), and other factors. Which units are involved, and how, can
be controlled by the subgroup control information that was parsed
from the coded video sequence by the parser (420). The flow of such
subgroup control information between the parser (420) and the
multiple units below is not depicted for clarity.
[0066] Beyond the functional blocks already mentioned, the video
decoder (410) can be conceptually subdivided into a number of
functional units as described below. In a practical implementation
operating under commercial constraints, many of these units
interact closely with each other and can, at least partly, be
integrated into each other. However, for the purpose of describing
the disclosed subject matter, the conceptual subdivision into the
functional units below is appropriate.
[0067] A first unit is the scaler/inverse transform unit (451). The
scaler/inverse transform unit (451) receives a quantized transform
coefficient as well as control information, including which
transform to use, block size, quantization factor, quantization
scaling matrices, etc. as symbol(s) (421) from the parser (420).
The scaler/inverse transform unit (451) can output blocks
comprising sample values that can be input into aggregator
(455).
[0068] In some cases, the output samples of the scaler/inverse
transform (451) can pertain to an intra coded block; that is: a
block that is not using predictive information from previously
reconstructed pictures, but can use predictive information from
previously reconstructed parts of the current picture. Such
predictive information can be provided by an intra picture
prediction unit (452). In some cases, the intra picture prediction
unit (452) generates a block of the same size and shape of the
block under reconstruction, using surrounding already reconstructed
information fetched from the current picture buffer (458). The
current picture buffer (458) buffers, for example, a partly
reconstructed current picture and/or a fully reconstructed current
picture. The aggregator (455), in some cases, adds, on a per sample
basis, the prediction information that the intra prediction unit
(452) has generated to the output sample information as provided by
the scaler/inverse transform unit (451).
[0069] In other cases, the output samples of the scaler/inverse
transform unit (451) can pertain to an inter coded, and potentially
motion compensated block. In such a case, a motion compensation
prediction unit (453) can access reference picture memory (457) to
fetch samples used for prediction. After motion compensating the
fetched samples in accordance with the symbols (421) pertaining to
the block, these samples can be added by the aggregator (455) to
the output of the scaler/inverse transform unit (451) (in this case
called the residual samples or residual signal) so as to generate
output sample information. The addresses within the reference
picture memory (457) from where the motion compensation prediction
unit (453) fetches prediction samples can be controlled by MVs,
available to the motion compensation prediction unit (453) in the
form of symbols (421) that can have, for example X, Y, and
reference picture components. Motion compensation also can include
interpolation of sample values as fetched from the reference
picture memory (457) when sub-sample exact MVs are in use, MV
prediction mechanisms, and so forth.
[0070] The output samples of the aggregator (455) can be subject to
various loop filtering techniques in the loop filter unit (456).
Video compression technologies can include in-loop filter
technologies that are controlled by parameters included in the
coded video sequence (also referred to as coded video bitstream)
and made available to the loop filter unit (456) as symbols (421)
from the parser (420), but can also be responsive to
meta-information obtained during the decoding of previous (in
decoding order) parts of the coded picture or coded video sequence,
as well as responsive to previously reconstructed and loop-filtered
sample values.
[0071] The output of the loop filter unit (456) can be a sample
stream that can be output to the render device (412) as well as
stored in the reference picture memory (457) for use in future
inter-picture prediction.
[0072] Certain coded pictures, once fully reconstructed, can be
used as reference pictures for future prediction. For example, once
a coded picture corresponding to a current picture is fully
reconstructed and the coded picture has been identified as a
reference picture (by, for example, the parser (420)), the current
picture buffer (458) can become a part of the reference picture
memory (457), and a fresh current picture buffer can be reallocated
before commencing the reconstruction of the following coded
picture.
[0073] The video decoder (410) may perform decoding operations
according to a predetermined video compression technology in a
standard, such as ITU-T Rec. H.265. The coded video sequence may
conform to a syntax specified by the video compression technology
or standard being used, in the sense that the coded video sequence
adheres to both the syntax of the video compression technology or
standard and the profiles as documented in the video compression
technology or standard. Specifically, a profile can select certain
tools as the only tools available for use under that profile from
all the tools available in the video compression technology or
standard. Also necessary for compliance can be that the complexity
of the coded video sequence is within bounds as defined by the
level of the video compression technology or standard. In some
cases, levels restrict the maximum picture size, maximum frame
rate, maximum reconstruction sample rate (measured in, for example
megasamples per second), maximum reference picture size, and so on.
Limits set by levels can, in some cases, be further restricted
through Hypothetical Reference Decoder (HRD) specifications and
metadata for HRD buffer management signaled in the coded video
sequence.
[0074] In an embodiment, the receiver (431) may receive additional
(redundant) data with the encoded video. The additional data may be
included as part of the coded video sequence(s). The additional
data may be used by the video decoder (410) to properly decode the
data and/or to more accurately reconstruct the original video data.
Additional data can be in the form of, for example, temporal,
spatial, or signal noise ratio (SNR) enhancement layers, redundant
slices, redundant pictures, forward error correction codes, and so
on.
[0075] FIG. 5 shows a block diagram of a video encoder (503)
according to an embodiment of the present disclosure. The video
encoder (503) is included in an electronic device (520). The
electronic device (520) includes a transmitter (540) (e.g.,
transmitting circuitry). The video encoder (503) can be used in the
place of the video encoder (303) in the FIG. 3 example.
[0076] The video encoder (503) may receive video samples from a
video source (501) (that is not part of the electronic device (520)
in the FIG. 5 example) that may capture video image(s) to be coded
by the video encoder (503). In another example, the video source
(501) is a part of the electronic device (520).
[0077] The video source (501) may provide the source video sequence
to be coded by the video encoder (503) in the form of a digital
video sample stream that can be of any suitable bit depth (for
example: 8 bit, 10 bit, 12 bit, . . . ), any colorspace (for
example, BT.601 Y CrCB, RGB, . . . ), and any suitable sampling
structure (for example Y CrCb 4:2:0, Y CrCb 4:4:4). In a media
serving system, the video source (501) may be a storage device
storing previously prepared video. In a videoconferencing system,
the video source (501) may be a camera that captures local image
information as a video sequence. Video data may be provided as a
plurality of individual pictures that impart motion when viewed in
sequence. The pictures themselves may be organized as a spatial
array of pixels, wherein each pixel can comprise one or more
samples depending on the sampling structure, color space, etc. in
use. A person skilled in the art can readily understand the
relationship between pixels and samples. The description below
focuses on samples.
[0078] According to an embodiment, the video encoder (503) may code
and compress the pictures of the source video sequence into a coded
video sequence (543) in real time or under any other time
constraints as required by the application. Enforcing appropriate
coding speed is one function of a controller (550). In some
embodiments, the controller (550) controls other functional units
as described below and is functionally coupled to the other
functional units. The coupling is not depicted for clarity.
Parameters set by the controller (550) can include rate control
related parameters (picture skip, quantizer, lambda value of
rate-distortion optimization techniques, . . . ), picture size,
group of pictures (GOP) layout, maximum MV allowed reference area,
and so forth. The controller (550) can be configured to have other
suitable functions that pertain to the video encoder (503)
optimized for a certain system design.
[0079] In some embodiments, the video encoder (503) is configured
to operate in a coding loop. As an oversimplified description, in
an example, the coding loop can include a source coder (530) (e.g.,
responsible for creating symbols, such as a symbol stream, based on
an input picture to be coded, and a reference picture(s)), and a
(local) decoder (533) embedded in the video encoder (503). The
decoder (533) reconstructs the symbols to create the sample data in
a similar manner as a (remote) decoder also would create (as any
compression between symbols and coded video bitstream is lossless
in the video compression technologies considered in the disclosed
subject matter). The reconstructed sample stream (sample data) is
input to the reference picture memory (534). As the decoding of a
symbol stream leads to bit-exact results independent of decoder
location (local or remote), the content in the reference picture
memory (534) is also bit exact between the local encoder and remote
encoder. In other words, the prediction part of an encoder "sees"
as reference picture samples exactly the same sample values as a
decoder would "see" when using prediction during decoding. This
fundamental principle of reference picture synchronicity (and
resulting drift, if synchronicity cannot be maintained, for example
because of channel errors) is used in some related arts as
well.
[0080] The operation of the "local" decoder (533) can be the same
as of a "remote" decoder, such as the video decoder (410), which
has already been described in detail above in conjunction with FIG.
4. Briefly referring also to FIG. 4, however, as symbols are
available and encoding/decoding of symbols to a coded video
sequence by an entropy coder (545) and the parser (420) can be
lossless, the entropy decoding parts of the video decoder (410),
including the buffer memory (415) and the parser (420) may not be
fully implemented in the local decoder (533).
[0081] An observation that can be made at this point is that any
decoder technology except the parsing/entropy decoding that is
present in a decoder also necessarily needs to be present, in
substantially identical functional form, in a corresponding
encoder. For this reason, the disclosed subject matter focuses on
decoder operation. The description of encoder technologies can be
abbreviated as they are the inverse of the comprehensively
described decoder technologies. Only in certain areas is a more
detailed description required, as provided below.
[0082] During operation, in some examples, the source coder (530)
may perform motion compensated predictive coding, which codes an
input picture predictively with reference to one or more
previously coded pictures from the video sequence that were
designated as "reference pictures". In this manner, the coding
engine (532) codes differences between pixel blocks of an input
picture and pixel blocks of reference picture(s) that may be
selected as prediction reference(s) to the input picture.
[0083] The local video decoder (533) may decode coded video data of
pictures that may be designated as reference pictures, based on
symbols created by the source coder (530). Operations of the coding
engine (532) may advantageously be lossy processes. When the coded
video data may be decoded at a video decoder (not shown in FIG. 5),
the reconstructed video sequence typically may be a replica of the
source video sequence with some errors. The local video decoder
(533) replicates decoding processes that may be performed by the
video decoder on reference pictures and may cause reconstructed
reference pictures to be stored in the reference picture cache
(534). In this manner, the video encoder (503) may store copies of
reconstructed reference pictures locally that have common content
with the reconstructed reference pictures that will be obtained by a
far-end video decoder (absent transmission errors).
[0084] The predictor (535) may perform prediction searches for the
coding engine (532). That is, for a new picture to be coded, the
predictor (535) may search the reference picture memory (534) for
sample data (as candidate reference pixel blocks) or certain
metadata such as reference picture MVs, block shapes, and so on,
that may serve as an appropriate prediction reference for the new
pictures. The predictor (535) may operate on a sample
block-by-pixel block basis to find appropriate prediction
references. In some cases, as determined by search results obtained
by the predictor (535), an input picture may have prediction
references drawn from multiple reference pictures stored in the
reference picture memory (534).
[0085] The controller (550) may manage coding operations of the
source coder (530), including, for example, setting of parameters
and subgroup parameters used for encoding the video data.
[0086] Output of all aforementioned functional units may be
subjected to entropy coding in the entropy coder (545). The entropy
coder (545) translates the symbols as generated by the various
functional units into a coded video sequence, by lossless
compressing the symbols according to technologies such as Huffman
coding, variable length coding, arithmetic coding, and so
forth.
[0087] The transmitter (540) may buffer the coded video sequence(s)
as created by the entropy coder (545) to prepare for transmission
via a communication channel (560), which may be a hardware/software
link to a storage device which would store the encoded video data.
The transmitter (540) may merge coded video data from the video
coder (503) with other data to be transmitted, for example, coded
audio data and/or ancillary data streams (sources not shown).
[0088] The controller (550) may manage operation of the video
encoder (503). During coding, the controller (550) may assign to
each coded picture a certain coded picture type, which may affect
the coding techniques that may be applied to the respective
picture. For example, pictures often may be assigned as one of the
following picture types:
[0089] An Intra Picture (I picture) may be one that may be coded
and decoded without using any other picture in the sequence as a
source of prediction. Some video codecs allow for different types
of intra pictures, including, for example Independent Decoder
Refresh ("IDR") Pictures. A person skilled in the art is aware of
those variants of I pictures and their respective applications and
features.
[0090] A predictive picture (P picture) may be one that may be
coded and decoded using intra prediction or inter prediction using
at most one MV and reference index to predict the sample values of
each block.
[0091] A bi-directionally predictive picture (B Picture) may be one
that may be coded and decoded using intra prediction or inter
prediction using at most two MVs and reference indices to predict
the sample values of each block. Similarly, multiple-predictive
pictures can use more than two reference pictures and associated
metadata for the reconstruction of a single block.
[0092] Source pictures commonly may be subdivided spatially into a
plurality of sample blocks (for example, blocks of 4×4, 8×8, 4×8,
or 16×16 samples each) and coded on a
block-by-block basis. Blocks may be coded predictively with
reference to other (already coded) blocks as determined by the
coding assignment applied to the blocks' respective pictures. For
example, blocks of I pictures may be coded non-predictively or they
may be coded predictively with reference to already coded blocks of
the same picture (spatial prediction or intra prediction). Pixel
blocks of P pictures may be coded predictively, via spatial
prediction or via temporal prediction with reference to one
previously coded reference picture. Blocks of B pictures may be
coded predictively, via spatial prediction or via temporal
prediction with reference to one or two previously coded reference
pictures.
[0093] The video encoder (503) may perform coding operations
according to a predetermined video coding technology or standard,
such as ITU-T Rec. H.265. In its operation, the video encoder (503)
may perform various compression operations, including predictive
coding operations that exploit temporal and spatial redundancies in
the input video sequence. The coded video data, therefore, may
conform to a syntax specified by the video coding technology or
standard being used.
[0094] In an embodiment, the transmitter (540) may transmit
additional data with the encoded video. The source coder (530) may
include such data as part of the coded video sequence. Additional
data may comprise temporal/spatial/SNR enhancement layers, other
forms of redundant data such as redundant pictures and slices, SEI
messages, VUI parameter set fragments, and so on.
[0095] A video may be captured as a plurality of source pictures
(video pictures) in a temporal sequence. Intra-picture prediction
(often abbreviated to intra prediction) makes use of spatial
correlation in a given picture, and inter-picture prediction makes
use of the (temporal or other) correlation between the pictures.
In an example, a specific picture under encoding/decoding, which is
referred to as a current picture, is partitioned into blocks. When
a block in the current picture is similar to a reference block in a
previously coded and still buffered reference picture in the video,
the block in the current picture can be coded by a vector that is
referred to as an MV. The MV points to the reference block in the
reference picture, and can have a third dimension identifying the
reference picture, in case multiple reference pictures are in
use.
[0096] In some embodiments, a bi-prediction technique can be used
in the inter-picture prediction. According to the bi-prediction
technique, two reference pictures, such as a first reference
picture and a second reference picture that are both prior in
decoding order to the current picture in the video (but may be in
the past and future, respectively, in display order) are used. A
block in the current picture can be coded by a first MV that points
to a first reference block in the first reference picture, and a
second MV that points to a second reference block in the second
reference picture. The block can be predicted by a combination of
the first reference block and the second reference block.
[0097] Further, a merge mode technique can be used in the
inter-picture prediction to improve coding efficiency.
[0098] According to some embodiments of the disclosure,
predictions, such as inter-picture predictions and intra-picture
predictions are performed in the unit of blocks. For example,
according to the HEVC standard, a picture in a sequence of video
pictures is partitioned into coding tree units (CTUs) for
compression. The CTUs in a picture have the same size, such as
64×64 pixels, 32×32 pixels, or 16×16 pixels. In
general, a CTU includes three coding tree blocks (CTBs), which are
one luma CTB and two chroma CTBs. Each CTU can be recursively
quad-tree split into one or multiple coding units (CUs). For
example, a CTU of 64×64 pixels can be split into one CU of
64×64 pixels, or 4 CUs of 32×32 pixels, or 16 CUs of
16×16 pixels. In an example, each CU is analyzed to determine
a prediction type for the CU, such as an inter prediction type or
an intra prediction type. The CU is split into one or more
prediction units (PUs) depending on the temporal and/or spatial
predictability. Generally, each PU includes a luma prediction block
(PB), and two chroma PBs. In an embodiment, a prediction operation
in coding (encoding/decoding) is performed in the unit of a
prediction block. Using a luma prediction block as an example of a
prediction block, the prediction block includes a matrix of values
(e.g., luma values) for pixels, such as 8×8 pixels,
16×16 pixels, 8×16 pixels, 16×8 pixels, and the
like.
[0099] FIG. 6 shows a diagram of a video encoder (603) according to
another embodiment of the disclosure. The video encoder (603) is
configured to receive a processing block (e.g., a prediction block)
of sample values within a current video picture in a sequence of
video pictures, and encode the processing block into a coded
picture that is part of a coded video sequence. In an example, the
video encoder (603) is used in the place of the video encoder (303)
in the FIG. 3 example.
[0100] In an HEVC example, the video encoder (603) receives a
matrix of sample values for a processing block, such as a
prediction block of 8×8 samples, and the like. The video
encoder (603) determines whether the processing block is best coded
using intra mode, inter mode, or bi-prediction mode using, for
example, rate-distortion optimization. When the processing block is
to be coded in intra mode, the video encoder (603) may use an intra
prediction technique to encode the processing block into the coded
picture; and when the processing block is to be coded in inter mode
or bi-prediction mode, the video encoder (603) may use an inter
prediction or bi-prediction technique, respectively, to encode the
processing block into the coded picture. In certain video coding
technologies, merge mode can be an inter picture prediction submode
where the MV is derived from one or more MV predictors without the
benefit of a coded MV component outside the predictors. In certain
other video coding technologies, a MV component applicable to the
subject block may be present. In an example, the video encoder
(603) includes other components, such as a mode decision module
(not shown) to determine the mode of the processing blocks.
[0101] In the FIG. 6 example, the video encoder (603) includes an
inter encoder (630), an intra encoder (622), a residue calculator
(623), a switch (626), a residue encoder (624), a general
controller (621), and an entropy encoder (625) coupled together as
shown in FIG. 6.
[0102] The inter encoder (630) is configured to receive the samples
of the current block (e.g., a processing block), compare the block
to one or more reference blocks in reference pictures (e.g., blocks
in previous pictures and later pictures), generate inter prediction
information (e.g., description of redundant information according
to inter encoding technique, MVs, merge mode information), and
calculate inter prediction results (e.g., prediction block) based
on the inter prediction information using any suitable technique.
In some examples, the reference pictures are decoded reference
pictures that are decoded based on the encoded video
information.
[0103] The intra encoder (622) is configured to receive the samples
of the current block (e.g., a processing block), in some cases
compare the block to blocks already coded in the same picture,
generate quantized coefficients after transform, and in some cases
also intra prediction information (e.g., intra prediction
direction information according to one or more intra encoding
techniques). In an example, the intra encoder (622) also calculates
intra prediction results (e.g., prediction block) based on the
intra prediction information and reference blocks in the same
picture.
[0104] The general controller (621) is configured to determine
general control data and control other components of the video
encoder (603) based on the general control data. In an example, the
general controller (621) determines the mode of the block, and
provides a control signal to the switch (626) based on the mode.
For example, when the mode is the intra mode, the general
controller (621) controls the switch (626) to select the intra mode
result for use by the residue calculator (623), and controls the
entropy encoder (625) to select the intra prediction information
and include the intra prediction information in the bitstream; and
when the mode is the inter mode, the general controller (621)
controls the switch (626) to select the inter prediction result for
use by the residue calculator (623), and controls the entropy
encoder (625) to select the inter prediction information and
include the inter prediction information in the bitstream.
[0105] The residue calculator (623) is configured to calculate a
difference (residue data) between the received block and prediction
results selected from the intra encoder (622) or the inter encoder
(630). The residue encoder (624) is configured to encode the
residue data to generate transform coefficients. In an example, the
residue encoder (624) is
configured to convert the residue data from a spatial domain to a
frequency domain, and generate the transform coefficients. The
transform coefficients are then subject to quantization processing
to obtain quantized transform coefficients. In various embodiments,
the video encoder (603) also includes a residue decoder (628). The
residue decoder (628) is configured to perform inverse-transform,
and generate the decoded residue data. The decoded residue data can
be suitably used by the intra encoder (622) and the inter encoder
(630). For example, the inter encoder (630) can generate decoded
blocks based on the decoded residue data and inter prediction
information, and the intra encoder (622) can generate decoded
blocks based on the decoded residue data and the intra prediction
information. The decoded blocks are suitably processed to generate
decoded pictures and the decoded pictures can be buffered in a
memory circuit (not shown) and used as reference pictures in some
examples.
[0106] The entropy encoder (625) is configured to format the
bitstream to include the encoded block. The entropy encoder (625)
is configured to include various information according to a
suitable standard such as HEVC. In an example, the entropy encoder
(625) is configured to include the general control data, the
selected prediction information (e.g., intra prediction information
or inter prediction information), the residue information, and
other suitable information in the bitstream. Note that, according
to the disclosed subject matter, when coding a block in the merge
submode of either inter mode or bi-prediction mode, there is no
residue information.
[0107] FIG. 7 shows a diagram of a video decoder (710) according to
another embodiment of the disclosure. The video decoder (710) is
configured to receive coded pictures that are part of a coded video
sequence, and decode the coded pictures to generate reconstructed
pictures. In an example, the video decoder (710) is used in the
place of the video decoder (310) in the FIG. 3 example.
[0108] In the FIG. 7 example, the video decoder (710) includes an
entropy decoder (771), an inter decoder (780), a residue decoder
(773), a reconstruction module (774), and an intra decoder (772)
coupled together as shown in FIG. 7.
[0109] The entropy decoder (771) can be configured to reconstruct,
from the coded picture, certain symbols that represent the syntax
elements of which the coded picture is made up. Such symbols can
include, for example, the mode in which a block is coded (such as,
for example, intra mode, inter mode, bi-predicted mode, the latter
two in merge submode or another submode), prediction information
(such as, for example, intra prediction information or inter
prediction information) that can identify certain sample or
metadata that is used for prediction by the intra decoder (772) or
the inter decoder (780), respectively, residual information in the
form of, for example, quantized transform coefficients, and the
like. In an example, when the prediction mode is inter or
bi-predicted mode, the inter prediction information is provided to
the inter decoder (780); and when the prediction type is the intra
prediction type, the intra prediction information is provided to
the intra decoder (772). The residual information can be subject to
inverse quantization and is provided to the residue decoder
(773).
[0110] The inter decoder (780) is configured to receive the inter
prediction information, and generate inter prediction results based
on the inter prediction information.
[0111] The intra decoder (772) is configured to receive the intra
prediction information, and generate prediction results based on
the intra prediction information.
[0112] The residue decoder (773) is configured to perform inverse
quantization to extract de-quantized transform coefficients, and
process the de-quantized transform coefficients to convert the
residual from the frequency domain to the spatial domain. The
residue decoder (773) may also require certain control information
(including the Quantizer Parameter (QP)), and that information may
be provided by the entropy decoder (771) (data path not depicted as
this may be low volume control information only).
[0113] The reconstruction module (774) is configured to combine, in
the spatial domain, the residual as output by the residue decoder
(773) and the prediction results (as output by the inter or intra
prediction modules as the case may be) to form a reconstructed
block, that may be part of the reconstructed picture, which in turn
may be part of the reconstructed video. It is noted that other
suitable operations, such as a deblocking operation and the like,
can be performed to improve the visual quality.
[0114] It is noted that the video encoders (303), (503), and (603),
and the video decoders (310), (410), and (710) can be implemented
using any suitable technique. In an embodiment, the video encoders
(303), (503), and (603), and the video decoders (310), (410), and
(710) can be implemented using one or more integrated circuits. In
another embodiment, the video encoders (303), (503), and (603), and
the video decoders (310), (410), and (710) can be implemented using
one or more processors that execute software instructions.
[0115] II. Intra Prediction
[0116] In some related examples such as VP9, 8 directional modes
are supported, which correspond to angles from 45 to 207 degrees.
To exploit more varieties of spatial redundancy in directional
textures, in some related examples such as AOMedia Video 1 (AV1),
directional intra modes are extended to an angle set with a finer
granularity. The original 8 angles are slightly changed and
referred to as nominal angles, and these 8 nominal angles are named
as V_PRED, H_PRED, D45_PRED, D135_PRED, D113_PRED, D157_PRED,
D203_PRED, and D67_PRED.
[0117] FIG. 8 shows exemplary nominal angles according to an
embodiment of the disclosure. Each nominal angle can be associated
with 7 finer angles, so there can be 56 directional angles in
total, in some related examples such as AV1. A prediction angle is
represented by a nominal intra angle plus an angle delta, which is
derived by multiplying a factor (with a range of -3 to 3) by a
step size of 3 degrees. To implement directional prediction modes
in AV1 through a generic way, all the 56 directional intra
prediction angles in AV1 can be implemented with a unified
directional predictor that projects each pixel to a reference
sub-pixel location and interpolates the reference sub-pixel by a
2-tap bilinear filter.
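As a non-normative illustration, the following Python sketch derives a
prediction angle from a nominal mode and a delta factor as described
above; the degree value assigned to each nominal mode follows the mode
name, and the mapping is an assumption for illustration only.

```python
# Illustrative sketch of the AV1-style directional-angle derivation
# described above; the delta-factor convention (-3..3, step of
# 3 degrees) is as stated in the text.
NOMINAL_ANGLES = {
    "D45_PRED": 45, "D67_PRED": 67, "V_PRED": 90, "D113_PRED": 113,
    "D135_PRED": 135, "D157_PRED": 157, "H_PRED": 180, "D203_PRED": 203,
}

def prediction_angle(nominal_mode: str, delta_factor: int) -> int:
    """Nominal intra angle plus an angle delta of (delta_factor * 3) degrees."""
    assert -3 <= delta_factor <= 3
    return NOMINAL_ANGLES[nominal_mode] + 3 * delta_factor

# 8 nominal angles x 7 delta factors = 56 directional angles in total.
```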
[0118] In some related examples such as AV1, there are 5
non-directional smooth intra prediction modes, which are DC, PAETH,
SMOOTH, SMOOTH_V, and SMOOTH_H. For DC prediction, an average of
left and above neighboring samples is used as a predictor of a
block to be predicted. For PAETH prediction, top, left, and
top-left reference samples are first fetched, and then the reference
value closest to (top + left - top-left), i.e., the sum of the top and
left samples minus the top-left sample, is set as the predictor for a
pixel to be predicted.
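A minimal, non-normative sketch of the two predictors just described
follows; the tie-breaking order in the PAETH case is an assumption.

```python
import numpy as np

def dc_predictor(top_row: np.ndarray, left_col: np.ndarray) -> int:
    """DC mode: average of the above and left neighboring samples."""
    return int(round(np.concatenate([top_row, left_col]).mean()))

def paeth_predictor(top: int, left: int, top_left: int) -> int:
    """PAETH: the reference value closest to top + left - top_left."""
    base = top + left - top_left
    return min((left, top, top_left), key=lambda ref: abs(base - ref))
```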
[0119] FIG. 9 shows positions of top, left, and top-left samples
for one pixel in a current block according to an embodiment of the
disclosure. For SMOOTH, SMOOTH_V, and SMOOTH_H modes, a block is
predicted using quadratic interpolation in vertical or horizontal
directions, or an average of both directions.
[0120] FIG. 10 shows an exemplary recursive filter intra mode
according to an embodiment of the disclosure.
[0121] To capture decaying spatial correlation with references on
the edges, FILTER INTRA modes are designed for luma blocks. Five
filter intra modes are defined in AV1, each represented by a set of
eight 7-tap filters reflecting the correlation between pixels in a
4×2 patch and the 7 neighbors adjacent to the patch. For example,
the weighting factors of a 7-tap filter are position dependent. As
shown in FIG. 10, an 8×8 block is split into eight 4×2
patches, which are indicated by B0, B1, B2, B3, B4, B5, B6, and B7.
For each patch, its 7 neighbors, indicated by R0.about.R6, are used
to predict pixels in the respective patch. For patch B0, all the
neighbors are already reconstructed. But for other patches, when
not all the neighbors are reconstructed, the predicted values of
immediate neighbors are used as reference values. For example, all
the neighbors of patch B7 are not reconstructed, so the prediction
samples of the neighbors (i.e., B5 and B6) of patch B7 are used
instead.
[0122] For a chroma component, a chroma-only intra prediction mode,
referred to as chroma from luma (CfL) mode, models chroma pixels as
a linear function of coincident reconstructed luma pixels. The CfL
prediction can be expressed as follows:
$$\mathrm{CfL}(\alpha) = \alpha \cdot L_{AC} + \mathrm{DC} \qquad \text{Eq. (1)}$$
where L_AC denotes the AC contribution of the luma component, α
denotes a parameter of the linear model, and DC denotes the DC
contribution of the chroma component. In an example, the
reconstructed luma pixels are subsampled to the chroma resolution,
and then the average value is subtracted to form the AC
contribution. To approximate the chroma AC component from the AC
contribution, instead of requiring a decoder to calculate scaling
parameters as in some related examples, the CfL mode in AV1
determines the parameter α based on the original chroma pixels and
signals it in the bitstream. This reduces decoder complexity and
yields more precise predictions. As for the DC contribution of the
chroma component, it is computed using intra DC mode, which is
sufficient for most chroma content and has mature fast
implementations.
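A minimal sketch of the CfL prediction of Eq. (1); the 2×2-averaging
subsampling (4:2:0) is an assumption for illustration.

```python
import numpy as np

def cfl_predict(luma_recon: np.ndarray, alpha: float,
                chroma_dc: float) -> np.ndarray:
    """Eq. (1): CfL(alpha) = alpha * L_AC + DC, with L_AC formed by
    subsampling the reconstructed luma and removing its average."""
    # Subsample to chroma resolution (2x2 average; assumes even dimensions).
    sub = 0.25 * (luma_recon[0::2, 0::2] + luma_recon[1::2, 0::2]
                  + luma_recon[0::2, 1::2] + luma_recon[1::2, 1::2])
    l_ac = sub - sub.mean()           # AC contribution of the luma component
    return alpha * l_ac + chroma_dc   # alpha is signaled; DC from intra DC mode
```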
[0123] III. Loop Filtering
[0124] In some related examples such as AV1, three in-loop filters
can be applied to reconstructed frames in the order of deblocking,
constrained directional enhancement filter (CDEF), and
loop-restoration filter. The loop-restoration filter includes a
Wiener filter and a self-guided projection (SGRPRJ) filter, one of
which can be adaptively chosen.
[0125] The deblocking can be applied across a transform block
boundary in order to reduce blocky artifacts caused by quantization
of transform coefficients. In some examples, 4-, 8-, and 14-tap
finite impulse response (FIR) filters can be used for luma blocks,
and 4- and 6-tap FIR filters can be used for chroma blocks.
[0126] A filter length is initially determined by a minimum
transform block size at the boundary. A conditional check can be
performed using a variance to avoid blurring the actual edge. In
addition, a flatness check can also be incorporated to finally
determine the filter length.
[0127] The CDEF is a non-linear de-ringing filter applied along a
detected directional feature in a target area. In some related
examples, an 8×8 area is a unit size on which the CDEF is
performed. A normative direction-detection can be performed as
illustrated in FIG. 11, which shows some exemplary directions in
the CDEF process.
[0128] In FIG. 11, the candidate direction d (0-7) that
minimizes the following quantity can be chosen as the dominant
direction:
$$E_d^2 = \sum_k \Bigl[ \sum_{p \in P_{d,k}} (x_p - \mu_{d,k})^2 \Bigr] \qquad \text{Eq. (2)}$$
where x_p is the value of a pixel p, P_{d,k} is the set of pixels in
line k following direction d, and μ_{d,k} (with N_{d,k} denoting the
number of pixels in P_{d,k}) is the mean value of P_{d,k}, obtained
as follows:
$$\mu_{d,k} = \frac{1}{N_{d,k}} \sum_{p \in P_{d,k}} x_p \qquad \text{Eq. (3)}$$
[0129] The sum of the squared sample values within a block is
constant. Thus, minimizing Eq. (2) corresponds to maximizing the
following quantity:
$$s_d = \sum_k \frac{1}{N_{d,k}} \Bigl( \sum_{p \in P_{d,k}} x_p \Bigr)^2 \qquad \text{Eq. (4)}$$
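The following Python sketch illustrates the direction search by
maximizing s_d of Eq. (4) over an 8×8 block; the per-direction
line-index maps are simplified assumptions, not the normative AV1
partitions.

```python
import numpy as np

# Simplified line-index maps for the 8 candidate directions
# (assumptions; the normative AV1 partitions differ in detail).
LINE_INDEX = [
    lambda i, j: i + j,             # 45 degrees
    lambda i, j: i + (j >> 1),      # ~67 degrees
    lambda i, j: i,                 # horizontal
    lambda i, j: i - (j >> 1),      # ~113 degrees
    lambda i, j: i - j,             # 135 degrees
    lambda i, j: (i >> 1) - j,      # ~157 degrees
    lambda i, j: j,                 # vertical
    lambda i, j: (i >> 1) + j,      # ~203 degrees
]

def cdef_dominant_direction(block: np.ndarray) -> int:
    """Pick d maximizing s_d = sum_k (1/N_{d,k}) (sum of line k)^2."""
    best_d, best_s = 0, -1.0
    for d, line_of in enumerate(LINE_INDEX):
        sums, counts = {}, {}
        for i in range(8):
            for j in range(8):
                k = line_of(i, j)
                sums[k] = sums.get(k, 0.0) + float(block[i, j])
                counts[k] = counts.get(k, 0) + 1
        s_d = sum(v * v / counts[k] for k, v in sums.items())
        if s_d > best_s:
            best_d, best_s = d, s_d
    return best_d
```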
[0130] Once the dominant direction is determined, the following
filter operation can be performed. A primary filter is applied
along the selected dominant direction, while a secondary filter is
applied along a secondary direction that is 45 degrees off the
dominant (or primary) direction:
$$\hat{p}(x,y) = p(x,y) + \sum_{m,n} w_{d,m,n}^{p}\, f\bigl(p(m,n) - p(x,y),\, S^{p},\, D\bigr) + \sum_{m,n} w_{d,m,n}^{s}\, f\bigl(p(m,n) - p(x,y),\, S^{s},\, D\bigr) \qquad \text{Eq. (5)}$$
where w^p and w^s are the fixed filter coefficients of the primary
and secondary filters, respectively, and the piecewise linear
function f is given as:
$$f(\mathrm{diff}, S, D) = \begin{cases} \min\bigl(\mathrm{diff},\, \max\bigl(0,\, S - \lfloor \mathrm{diff} / 2^{\,D - \lfloor \log_2 S \rfloor} \rfloor\bigr)\bigr), & \text{if } \mathrm{diff} > 0 \\ \max\bigl(\mathrm{diff},\, \min\bigl(0,\, \lfloor -\mathrm{diff} / 2^{\,D - \lfloor \log_2 S \rfloor} \rfloor - S\bigr)\bigr), & \text{otherwise} \end{cases} \qquad \text{Eq. (6)}$$
with S and D representing strength and damping values,
respectively; up to 8 presets of (S, D) values for luma/chroma are
signaled per frame.
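A small sketch of the constraint function f of Eq. (6); the integer
shift and the zero-strength guard follow the reconstruction above and
should be read as an illustration, not the normative definition.

```python
import math

def cdef_constraint(diff: int, strength: int, damping: int) -> int:
    """Piecewise-linear constraint of Eq. (6): small differences pass
    through, large differences are suppressed to preserve true edges."""
    if strength == 0 or diff == 0:
        return 0
    shift = max(0, damping - int(math.log2(strength)))
    mag = min(abs(diff), max(0, strength - (abs(diff) >> shift)))
    return mag if diff > 0 else -mag
```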
[0131] When applying the filtering, each 64×64 block within a
tile can choose one of the presets, and the filtering can be
performed on each 8×8 unit in the respective 64×64
block.
[0132] It is noted that during a decoding process, several
variables related to signal characteristics of a local area can be
parsed from a bitstream or derived. These variables include dir,
var, priStr, secStr, and damping. The variable dir represents the
dominant edge direction of an 8×8 block. The variable var
represents the variance of the signal values inside an 8×8 block
and is defined as a normalized difference between the cost of the
dominant direction and the cost along the direction orthogonal to
the dominant direction. The variable priStr represents the primary
filter strength S^p of a 64×64 block containing 8×8
filtering units. The variable secStr represents the secondary
filter strength S^s of a 64×64 block containing 8×8
filtering units. The variable damping represents the damping
parameter D of a 64×64 block containing 8×8 filtering
units. These values can be obtained separately for the luma and
chroma channels.
[0133] After the deblocking and CDEF processes are performed, a
mutually exclusive application of two types of restoration filter
can be performed in some related examples such as AV1. The two
types of restoration filter are the Wiener filter and the SGRPRJ
filter. A square loop-restoration unit (LRU) size can be
chosen from 64×64 to 256×256.
[0134] In the Wiener filter, the quality of each reconstructed
pixel in a coded frame can be improved through non-causal
filtering with the neighboring pixels within a W×W window around
the respective pixel. The 2D filter taps of the Wiener filter can
be denoted by F and determined as:
$$F = H^{-1} M \qquad \text{Eq. (7)}$$
where H = E[XX^T] is the auto-covariance of x, which contains the
column-vectorized W^2 samples in the W×W window, and M = E[YX^T]
is the cross-correlation of x with the original source sample y.
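A least-squares sketch of Eq. (7), estimating the W×W Wiener taps
from a degraded frame and its source; boundary handling and the lack
of regularization are simplifying assumptions.

```python
import numpy as np

def wiener_taps(degraded: np.ndarray, source: np.ndarray,
                W: int = 5) -> np.ndarray:
    """Sample estimate of F = H^{-1} M, with H = E[X X^T], M = E[Y X^T]."""
    r = W // 2
    h, w = degraded.shape
    windows, targets = [], []
    for i in range(r, h - r):
        for j in range(r, w - r):
            windows.append(degraded[i - r:i + r + 1, j - r:j + r + 1].ravel())
            targets.append(source[i, j])
    X = np.asarray(windows, dtype=np.float64)   # each row: W^2 window samples
    y = np.asarray(targets, dtype=np.float64)
    H = X.T @ X / len(y)                        # sample auto-covariance
    M = X.T @ y / len(y)                        # sample cross-correlation
    return np.linalg.solve(H, M).reshape(W, W)  # 2D filter taps F
```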
[0135] In some related examples such as AV1, the separability of F
and the symmetry and normalization of the filter coefficients of
the Wiener filter can be imposed as constraints. The Wiener
filtering coefficients F (formed as a W^2×1 vector) can be defined
as:
$$F = \mathrm{column\_vectorize}\,[\,a\, b^{T}\,] \qquad \text{Eq. (8)}$$
where a and b are W×1 vertical and horizontal filters such that
a(i) = a(W-1-i), b(i) = b(W-1-i), for i = 0, 1, ..., r-1, and
Σa(i) = Σb(i) = 1. The coefficient vectors a and b can be
searched at an encoder and coded in a bitstream.
[0136] In SGRPRJ filtering, a simple linear filtering described by
the following model is performed to obtain a simple restored
version x̂ from a degraded image x:
$$\hat{x} = F x + G \qquad \text{Eq. (9)}$$
where F and G can be obtained using a guide image and the degraded
image. In some related examples such as AV1, a form of
self-guided filtering is adopted in which no separate guide image
is used; only the statistics of the degraded image itself are used
to determine F and G.
[0137] More specifically, a local mean μ and variance σ² of the
pixels within a (2r+1)×(2r+1) window around a pixel can be
calculated, and every pixel x can be filtered as follows:
$$\hat{x} = \frac{\sigma^2}{\sigma^2 + e}\, x + \frac{e}{\sigma^2 + e}\, \mu \qquad \text{Eq. (10)}$$
where r specifies the search window size and e is a noise parameter
controlling the de-noising strength.
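A direct per-pixel sketch of Eq. (10); edge pixels reuse a clipped
window, and an integral-image implementation would be used in
practice.

```python
import numpy as np

def self_guided_filter(x: np.ndarray, r: int, e: float) -> np.ndarray:
    """Eq. (10): blend each pixel with the local mean, weighted by the
    local variance, so edges are kept and flat areas are smoothed."""
    xf = x.astype(np.float64)
    out = np.empty_like(xf)
    h, w = xf.shape
    for i in range(h):
        for j in range(w):
            win = xf[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1]
            var = win.var()
            t = var / (var + e)        # ~1 at edges, ~0 in flat areas
            out[i, j] = t * xf[i, j] + (1.0 - t) * win.mean()
    return out
```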
[0138] Given two simple reconstructions X_1 and X_2 obtained from a
degraded image X by Eq. (9), the following subspace projection is
performed to construct the final output X_r:
$$X_r = X + \alpha (X_1 - X) + \beta (X_2 - X) \qquad \text{Eq. (11)}$$
[0139] With X, X_1, X_2, and the source Y, an encoder can compute
α and β as follows:
$$[\alpha \;\; \beta]^{T} = (A^{T} A)^{-1} A^{T} b \qquad \text{Eq. (12)}$$
where A = {X_1 - X, X_2 - X} and b = Y - X.
[0140] Then, the encoder can send a 6-tuple (r_1, e_1, r_2, e_2, α,
β) for each LRU.
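A sketch of the encoder-side computation of Eqs. (11) and (12): α
and β are the least-squares weights of the two error images, and X_r
is the projected output.

```python
import numpy as np

def sgrprj_projection(X, X1, X2, Y):
    """Solve [alpha beta]^T = (A^T A)^{-1} A^T b with A = {X1-X, X2-X}
    and b = Y - X (Eq. (12)), then form X_r by Eq. (11)."""
    A = np.stack([(X1 - X).ravel(), (X2 - X).ravel()], axis=1)
    b = (Y - X).ravel()
    alpha, beta = np.linalg.lstsq(A, b, rcond=None)[0]
    Xr = X + alpha * (X1 - X) + beta * (X2 - X)
    return alpha, beta, Xr
```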
[0141] IV. Feature Information Reuse for Enhanced Restoration
Filtering
[0142] In some related examples such as AV1, a Wiener filter can be
performed with a unit of square size from 64×64 to
256×256 by uniformly dividing a frame/tile into LRUs. In an
example, filter coefficients of the Wiener filter can be obtained
by assuming the signal statistics are stationary. Therefore, it is
desirable to classify a target area of filtering into one of
categorized statistical types where the stationarity assumption can
reasonably hold. Possible approaches to classifying the target area
include using quantities such as local variances or edge
information. While these quantities themselves or related category
information can be calculated at a decoder or signaled in a
bitstream, it can be costly in terms of computation or bitrate
overhead.
[0143] In some related examples such as AV1, an SGRPRJ filter can
be performed with a unit of square size from 64×64 to
256×256 by uniformly dividing a frame/tile into LRUs. In the
SGRPRJ filter, a form of simple edge-preserving filtering is
performed to construct simple restored images using a fixed radius
and noise parameter pair for each LRU. Furthermore, fixed
projection parameters .alpha. and .beta. can be used for each LRU
as weighting factors for error images to form a final
reconstruction. However, regions in the error images can have
different statistical characteristics reflecting local signal
features such as edges and textures. Therefore, if a single set of
SGRPRJ filter parameters, such as the radius, noise parameter, α,
and β, is to be used or estimated over an LRU region covering
pixels with widely varying signal statistics, the quality of the final
reconstruction can be compromised. On the other hand, incorporating
signal classification for more adaptivity can essentially pose a
same challenge as the Wiener filtering case in terms of the extra
cost of computational burden or bitrate overhead.
[0144] This disclosure includes methods of enhancing the
performance of the restoration filtering techniques by reusing
feature information such as directionality information derived from
a CDEF process and/or an intra prediction mode. For example, an
adaptive restoration filtering technique can effectively reuse
signal feature and statistics information already available at a
decoder.
[0145] In the disclosure, a restoration filter (or filtering)
process can be defined as a filtering process that operates on a
noisy image and estimates a clean, original image based on
the noisy image. The restoration filter process can include a
process used for blurring an image or a reverse process used for an
inverse of blurring the image. Examples of the restoration filter
process include, but are not limited to, Wiener and SGRPRJ
filtering processes. A restoration filter (or filtering) unit is a
region where the restoration filter process is performed.
[0146] In the disclosure, a directionality information unit can be
defined as a group of pixels with a designated shape and size that
provides a dominant directionality of the features represented by
the pixel values of the group of pixels. In an example such as AV1,
each directionality information unit in a CDEF process can be an
8×8 block. A dominant direction and variance values of the pixels
in each 8×8 block can be derived in a normative way. In
another example, directional intra prediction modes in AV1 can
provide such information with units of varying shapes and sizes
corresponding to intra prediction blocks.
[0147] According to aspects of the disclosure, directionality
information derived at a decoder can be reused to infer the
presence and directionality of boundary edges for a restoration
filter, such as a Wiener filter or an SGRPRJ filter. For example,
the directionality information can be derived from a CDEF
process.
[0148] According to some embodiments, a shape and a size of a
restoration filtering unit (e.g., Wiener or SGRPRJ filtering unit)
can be defined using a multiple of the available directionality
information units (e.g., the 8×8 blocks used in CDEF
direction-detection and filtering in AV1). In such a way, a finer
granularity of directional adaptivity can be realized.
Accordingly, a restoration filtering process can be performed with
a smaller unit size than the fixed square sizes of 64×64,
128×128, or 256×256 used in some related examples such
as AV1.
[0149] In one embodiment, a size of a restoration filtering unit
can be the same as an LRU size, for example, as defined in AV1.
[0150] In one embodiment, a size of a restoration filtering unit
can be the same as the size of a directionality information unit
(e.g., 8×8).
[0151] In one embodiment, a restoration filtering unit can be
further split from a given LRU size into square, rectangular,
T-shaped, or 4-way sub-LRUs, similarly to or in alignment with
partitions in some related examples such as the partitions in AV1,
as shown in FIG. 12.
[0152] In one embodiment, blocks each with the size of a
directionality information unit (e.g., 8×8) can be merged by
following various scanning orders to form a filtering unit. FIG. 13
shows an example where blocks each with the size of an 8×8
directionality unit are merged into filtering units each with a
size of 32×8 in a raster-scanning order. For example, four
8×8 directionality unit blocks (1301)-(1304) can be merged
into a 32×8 filtering unit block (1310), and four 8×8
directionality unit blocks (1305)-(1308) can be merged into a
32×8 filtering unit block (1320).
[0153] In one embodiment, blocks each with the size of a
directionality information unit (e.g., 8×8) and with similar
directionalities can be merged by following various scanning orders
to form a filtering unit. FIG. 14 shows an example where blocks
each with the size of an 8×8 directionality unit are merged into
variable-sized filtering units in a raster-scanning order. The
sizes of the filtering units include 8×8, 16×8, and
32×8. For example, one 8×8 directionality unit block
(1401) can be an 8×8 filtering unit block (1410), two
8×8 directionality unit blocks (1403)-(1404) can be merged
into a 16×8 filtering unit block (1420), and four 8×8
directionality unit blocks (1405)-(1408) can be merged into a
32×8 filtering unit block (1430).
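A simplified one-dimensional sketch of the FIG. 14 style merging,
where a raster-scan run of 8×8 units sharing a directionality is
merged up to a 32×8 unit; the cap and the run-length rule are
assumptions for illustration.

```python
def merge_by_direction(dirs, max_merge: int = 4):
    """Run-length merge of consecutive 8x8 units with equal
    directionality; returns each filtering unit's width in 8x8
    blocks (1 -> 8x8, 4 -> 32x8)."""
    if not dirs:
        return []
    units, run = [], 1
    for prev, cur in zip(dirs, dirs[1:]):
        if cur == prev and run < max_merge:
            run += 1
        else:
            units.append(run)
            run = 1
    units.append(run)
    return units

# Example: [3, 5, 5, 7, 7, 7, 7, 2] -> [1, 2, 4, 1],
# i.e., an 8x8, a 16x8, a 32x8, and an 8x8 filtering unit.
```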
[0154] According to some embodiments, each available directionality
of a CDEF process can be directly used as a class index for a
signal class, for which a unique set of restoration filter shapes
and sizes can be defined. That is, a selection of a restoration
filter can depend on the available directionalities of the CDEF
process.
[0155] In one embodiment, a solution to an equation in the
computation of a restoration filter such as Eq. (7) can be applied
per signal class.
[0156] In one embodiment, different shapes of 2D filters with
different numbers of filter-taps, with or without symmetry, can be
used per signal class.
[0157] In one embodiment, the separability (separable or
non-separable filter) of 2D filters can depend on the available
directionalities of the CDEF process.
[0158] In one embodiment, multiple directions out of available
directions of a CDEF process can be merged into a single class,
resulting in a reduced number of directionality classes. A unique
set of restoration filter shapes and sizes can be defined for each
merged class.
[0159] According to some embodiments, in addition to the
directionality, block variance information can be combined to
further refine the directionality-based classes.
[0160] In one embodiment, a directionality information unit with a
size of 8×8 in a CDEF process can be further classified into
different sub-classes. The classification may be based on the
variance values of the directionality information unit. For
example, if the number of classes based on directionality is 5 and
the number of classes based on variance is 3, then there can be 15
signal classes, for each of which a restoration filter (e.g.,
Wiener or SGRPRJ) can be designed.
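A sketch of the joint class index for the 5-direction, 3-variance
example above; the variance thresholds are illustrative assumptions.

```python
def signal_class(direction: int, variance: float,
                 var_thresholds=(64.0, 1024.0)) -> int:
    """Refine a direction class (0..4) with a variance sub-class
    (0..2), giving 5 x 3 = 15 signal classes in total."""
    var_class = sum(variance >= t for t in var_thresholds)  # 0, 1, or 2
    return direction * (len(var_thresholds) + 1) + var_class
```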
[0161] According to some embodiments, in addition to the
directionality, filter strengths can be combined to determine
signal classes of a restoration filtering unit. For example,
signaled primary and secondary CDEF filter strengths can be
combined to determine signal classes of a restoration filtering
unit. Different filter strength presets chosen by an encoder can
indicate different signal characteristics of a target area.
[0162] In one embodiment, one of the presets of primary and
secondary filter strengths signaled in a bitstream can be directly
used as another dimension of the signal class index. For example,
if the number of classes based on directionality is 5 and the
number of classes based on the presets of filter strengths is 4,
then there can be 20 signal classes, for each of which a
restoration filter (e.g., Wiener or SGRPRJ) can be designed.
[0163] According to some embodiments, a majority vote or
consistency check of directionalities included in a restoration
filtering unit can be performed to determine a filter class.
[0164] In one embodiment, when a size and a shape of a restoration
filtering unit are fixed and the number of directionality
information units (e.g., 8×8 in the case of CDEF) included in the
restoration filtering unit is more than a predefined number, a
majority vote or consistency check of directionalities included in
the restoration filtering unit can be performed to determine a
filter class.
[0165] In one embodiment, for the majority vote case, a most
frequent directionality among available and possibly merged
directionalities can be chosen. In an example, a certain margin can
be set between the first and second most frequent
directionalities.
[0166] In one embodiment, before taking the majority vote, it is
determined whether the number of classes inside a restoration
filtering unit is more than a predefined number. If so,
inconsistency can be declared, and either explicit signaling for
the restoration filtering unit or the use of a smaller restoration
filtering unit can be chosen.
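The majority vote and consistency check described in the last three
paragraphs can be sketched as follows; the class-count limit and the
margin are illustrative parameters.

```python
from collections import Counter

def vote_filter_class(unit_dirs, max_classes: int = 3,
                      min_margin: int = 2):
    """Return the majority directionality of a restoration filtering
    unit, or None (inconsistent) when too many classes appear or the
    top two counts are closer than min_margin."""
    counts = Counter(unit_dirs)
    if not counts or len(counts) > max_classes:
        return None    # fall back: explicit signaling or a smaller unit
    ranked = counts.most_common(2)
    if len(ranked) == 2 and ranked[0][1] - ranked[1][1] < min_margin:
        return None
    return ranked[0][0]
```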
[0167] According to some embodiments, the same manner and
directionality information used for a restoration filtering unit in
a luma component can be used for a restoration filtering unit in a
chroma component when such information is available only for the
luma component from a CDEF process.
[0168] According to some embodiments, a restoration filtering unit
in a chroma component can use its own filter strengths, such as
preset values of primary and secondary filter strengths included in
the chroma component when such information is available for chroma
components.
[0169] According to some embodiments, a restoration filtering unit
in a chroma component can use its own variance information when
such information is available for chroma components.
[0170] In one embodiment, when a CDEF process is turned off and a
restoration filter is turned on, a direction search process of the
CDEF process as described in section III (loop filtering section)
can be applied to derive the presence and the directionality of the
boundary edges for the restoration filtering unit.
[0171] In one embodiment, when a CDEF process is turned off and a
restoration filter is turned on, a default signal class can be
chosen or explicit signaling of a filter class index can be
performed.
[0172] According to aspects of the disclosure, directionality
information indicated by intra-prediction modes that are available
at a decoder can be reused in a restoration filtering unit (e.g.,
Wiener or SGRPRJ filtering unit) as a guide to label signal
classes, for each of which a unique set of the restoration filter
shapes and sizes can be defined. In some embodiments, such
directional information can be provided with varying unit sizes by
an encoder for intra-prediction.
[0173] According to some embodiments, a shape and a size of a
restoration filtering unit can be defined using a multiple of the
available directionality information units (e.g., 8×8 for a
directional intra prediction unit in AV1). In such a way, a finer
granularity of directional adaptivity can be realized.
Accordingly, a restoration filtering process can be performed with
a smaller unit size than the fixed square sizes of 64×64,
128×128, or 256×256 used in some related examples such
as AV1.
[0174] In one embodiment, a restoration filtering unit size can be
the same as an LRU size, for example, as defined in AV1.
[0175] In one embodiment, a restoration filtering unit size can be
the same as that of the directionality information unit (e.g.,
8×8).
[0176] In one embodiment, a restoration filtering unit can be
further split from a given LRU size into square, rectangular,
T-shaped, or 4-way sub-LRUs, similarly to or in alignment with
partitions in some related examples such as the partitions in AV1,
as shown in FIG. 12.
[0177] In one embodiment, blocks each with a size of a
directionality information unit (e.g., 8×8) can be merged by
following various scanning orders to form a filtering unit, as
shown in FIG. 13.
[0178] In one embodiment, blocks each with a size of a
directionality information unit (e.g., 8×8) and with similar
directionalities can be merged by following various scanning orders
to form a filtering unit, as shown in FIG. 14.
[0179] According to some embodiments, a fixed number of signal
classes can be defined, and blocks of each signal class can have
intra-prediction modes with similar directionalities. A unique set
of restoration filter shapes and sizes can be defined for each of
the signal classes. If non-angular intra modes such as SMOOTH
(including SMOOTH, SMOOTH_H, SMOOTH_V modes), Paeth predictor, or
DC mode are used, each or a combination of the non-angular intra
modes can be associated with its own signal class.
[0180] In one embodiment, each of the 8 nominal angles together
with the associated 7 possible delta-angles in AV1 can be grouped
together to form a total of 8 directionality classes.
[0181] In one embodiment, a directionality class can depend on both
a nominal angle and a delta-angle associated with the nominal
angle.
[0182] According to some embodiments, when a restoration filtering
unit area is not predicted using a directional intra mode or an
intra prediction mode, a signal class of the restoration filtering
unit can be determined based on neighboring blocks of the
restoration filtering unit. A default signal class can be chosen or
an explicit signaling of a signal class index or filter
coefficients can be performed based on whether a neighboring block
is coded in a directional intra mode or directionalities of the
neighboring blocks are inconsistent. For example, if no neighboring
block is coded in a directional intra mode or directionalities of
the neighboring blocks are not consistent, a default signal class
can be chosen or an explicit signaling of a signal class index or
filter coefficients can be performed.
[0183] According to some embodiments, a majority vote or
consistency check of directionalities included in a restoration
filtering unit can be performed to determine a filter class.
[0184] In one embodiment, when a size and a shape of a restoration
filtering unit are fixed and the number of directionality
information units (e.g., 8×8 in the case of CDEF) included in the
restoration filtering unit is more than a predefined number, a
majority vote or consistency check of directionalities included in
the restoration filtering unit can be performed to determine a
filter class. The directionality information units can have varied
sizes and different directional intra prediction modes.
[0185] In one embodiment, for the majority vote case, a most
frequent directionality among available and possibly merged
directionalities can be chosen. In an example, a certain margin can
be set between the first and second most frequent
directionalities.
[0186] In one embodiment, before taking the majority vote, it is
determined whether the number of classes inside a restoration
filtering unit is more than a predefined number. If so,
inconsistency can be declared, and either explicit signaling for
the restoration filtering unit or the use of a smaller restoration
filtering unit can be chosen.
[0187] According to some embodiments, a restoration filtering unit
in a chroma component can use its own directionality information
from directional intra prediction modes when such information is
available separately for chroma components.
[0188] According to aspects of the disclosure, when directionality
information is available from both intra prediction modes and a
CDEF process, directional modes of the intra prediction modes
together with the directionality information from the CDEF process
can be used as a guide to identify and classify a restoration
filtering unit (e.g., Wiener or SGRPRJ filtering unit).
[0189] In one embodiment, when directionality information is
available from both an intra prediction direction and a CDEF
process, a mapping of directionalities from both sources can be
introduced in checking the consistency of the directional
information. A classification-based restoration filtering process
can be performed only when the directional information is
consistent. FIG. 15 shows an example where the intra prediction
directions corresponding to the 7 delta-angles associated with one
of the 8 nominal angles in AV1 can be mapped to a single
directionality class, and thus a one-to-one correspondence can be
formed with the 8 directionalities derived during the CDEF process.
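A sketch of the FIG. 15 style mapping, folding the delta-angles of
each nominal intra angle into a single class so that an intra
direction and a CDEF direction can be compared one-to-one; the index
pairing between the two 8-way sets is an assumption.

```python
def intra_dir_class(nominal_index: int, delta_factor: int) -> int:
    """All 7 delta-angles of a nominal angle map to that angle's class."""
    assert 0 <= nominal_index < 8 and -3 <= delta_factor <= 3
    return nominal_index

def directions_consistent(nominal_index: int, delta_factor: int,
                          cdef_direction: int) -> bool:
    """Consistency check between an intra direction and a CDEF
    direction, assuming the two 8-way index sets are aligned."""
    return intra_dir_class(nominal_index, delta_factor) == cdef_direction
```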
[0190] In one embodiment, directionality information from one of an
intra prediction mode or a CDEF process can be used as a first
source. The other one of the intra prediction mode or the CDEF
process can be used only when a consistent direction cannot be
determined with the directionality information from the first
source.
[0191] It is noted that the Wiener filter and the SGRPRJ filter can
be adaptively chosen per LRU in some examples such as AV1.
Determination of the filter parameters and the filtering process
can be performed per LRU for both filters in the same way. The
differences can be the type and the number of the filter
parameters.
[0192] In one embodiment, when an SGRPRJ filter is performed, a
search-window size r and a noise-parameter e can be defined and
used per signal class in the same way as the Wiener filter
parameters.
[0193] In one embodiment, when an SGRPRJ filter is performed,
projection parameters α and β can be defined and used
per signal class in the same way as the Wiener filter
parameters.
[0194] In one embodiment, when an SGRPRJ filter is performed, a
search-window size r, a noise-parameter e, and projection
parameters α and β can be defined and used per signal
class in the same way as the Wiener filter parameters.
[0195] V. Flowchart
[0196] FIG. 16 shows a flow chart outlining an exemplary process
(1600) according to an embodiment of the disclosure. In various
embodiments, the process (1600) is executed by processing
circuitry, such as the processing circuitry in the terminal devices
(210), (220), (230) and (240), the processing circuitry that
performs functions of the video encoder (303), the processing
circuitry that performs functions of the video decoder (310), the
processing circuitry that performs functions of the video decoder
(410), the processing circuitry that performs functions of the
intra prediction module (452), the processing circuitry that
performs functions of the video encoder (503), the processing
circuitry that performs functions of the predictor (535), the
processing circuitry that performs functions of the intra encoder
(622), the processing circuitry that performs functions of the
intra decoder (772), and the like. In some embodiments, the process
(1600) is implemented in software instructions, thus when the
processing circuitry executes the software instructions, the
processing circuitry performs the process (1600).
[0197] The process (1600) may generally start at step (S1610),
where the process (1600) determines directionality information of a
restoration filter unit included in a video frame based on at least
one of a CDEF process or an intra prediction mode. Then, the
process (1600) proceeds to step (S1620).
[0198] At step (S1620), the process (1600) determines one of a
plurality of sets of filter parameters of a restoration filter
process based on the directionality information of the restoration
filter unit. Then, the process (1600) proceeds to step (S1630).
[0199] At step (S1630), the process (1600) performs the restoration
filter process on the restoration filter unit based on the one of
the plurality of sets of filter parameters. Then, the process
(1600) proceeds to step (S1640).
[0200] At step (S1640), the process (1600) reconstructs the video
frame based on the filtered restoration filter unit. Then, the
process (1600) terminates.
[0201] In an embodiment, the restoration filter unit includes one
or more directionality information units and the at least one of
the CDEF process or the intra prediction mode is performed on one
of the one or more directionality information units.
[0202] In an embodiment, each of the plurality of sets of filter
parameters of the restoration filter is associated with at least
one directionality of the CDEF process.
[0203] In an embodiment, the process (1600) determines the one of
the plurality of sets of filter parameters of the restoration
filter process based on the directionality information and block
variance information of the restoration filter unit.
[0204] In an embodiment, the process (1600) determines the one of
the plurality of sets of filter parameters of the restoration
filter process based on the directionality information of the
restoration filter unit and filter strengths of the CDEF
process.
[0205] In an embodiment, the process (1600) determines the
directionality information of the restoration filter unit based on
at least one of a majority vote or a consistency check of
directionalities in the restoration filter unit.
[0206] In an embodiment, the process (1600) determines the
directionality information of the restoration filter unit based on
the intra prediction mode which is performed on a neighboring block
of the restoration filter unit based on the restoration filter unit
not being intra coded and the neighboring block being intra
coded.
[0207] In an embodiment, the process (1600) performs the
restoration filter process on the restoration filter unit based on
the directionality information determined from the CDEF process
matching the directionality information determined from the intra
prediction mode.
[0208] In an embodiment, the restoration filter process is one of a
Wiener filter process and an SGRPRJ filter process.
[0209] In an embodiment, the process (1600) determines the one of
the plurality of sets of filter parameters of the restoration
filter process based on one of a default set of filter parameters,
a signaled index indicating a set of filter parameters, or a
signaled set of filter parameters.
[0210] VI. Computer System
[0211] The techniques described above can be implemented as
computer software using computer-readable instructions and
physically stored in one or more computer-readable media. For
example, FIG. 17 shows a computer system (1700) suitable for
implementing certain embodiments of the disclosed subject
matter.
[0212] The computer software can be coded using any suitable
machine code or computer language, that may be subject to assembly,
compilation, linking, or like mechanisms to create code comprising
instructions that can be executed directly, or through
interpretation, micro-code execution, and the like, by one or more
computer central processing units (CPUs), Graphics Processing Units
(GPUs), and the like.
[0213] The instructions can be executed on various types of
computers or components thereof, including, for example, personal
computers, tablet computers, servers, smartphones, gaming devices,
internet of things devices, and the like.
[0214] The components shown in FIG. 17 for computer system (1700)
are exemplary in nature and are not intended to suggest any
limitation as to the scope of use or functionality of the computer
software implementing embodiments of the present disclosure.
Neither should the configuration of components be interpreted as
having any dependency or requirement relating to any one or
combination of components illustrated in the exemplary embodiment
of a computer system (1700).
[0215] Computer system (1700) may include certain human interface
input devices. Such a human interface input device may be
responsive to input by one or more human users through, for
example, tactile input (such as: keystrokes, swipes, data glove
movements), audio input (such as: voice, clapping), visual input
(such as: gestures), olfactory input (not depicted). The human
interface devices can also be used to capture certain media not
necessarily directly related to conscious input by a human, such as
audio (such as: speech, music, ambient sound), images (such as:
scanned images, photographic images obtained from a still image
camera), video (such as two-dimensional video, three-dimensional
video including stereoscopic video).
[0216] Input human interface devices may include one or more of
(only one of each depicted): keyboard (1701), mouse (1702),
trackpad (1703), touch screen (1710), data-glove (not shown),
joystick (1705), microphone (1706), scanner (1707), and camera
(1708).
[0217] Computer system (1700) may also include certain human
interface output devices. Such human interface output devices may
be stimulating the senses of one or more human users through, for
example, tactile output, sound, light, and smell/taste. Such human
interface output devices may include tactile output devices (for
example tactile feedback by the touch-screen (1710), data-glove
(not shown), or joystick (1705), but there can also be tactile
feedback devices that do not serve as input devices), audio output
devices (such as: speakers (1709), headphones (not depicted)),
visual output devices (such as screens (1710) to include CRT
screens, LCD screens, plasma screens, OLED screens, each with or
without touch-screen input capability, each with or without tactile
feedback capability--some of which may be capable of outputting
two-dimensional visual output or more than three-dimensional output
through means such as stereographic output; virtual-reality glasses
(not depicted), holographic displays and smoke tanks (not
depicted)), and printers (not depicted). These visual output
devices (such as screens (1710)) can be connected to a system bus
(1748) through a graphics adapter (1750).
[0218] Computer system (1700) can also include human accessible
storage devices and their associated media such as optical media
including CD/DVD ROM/RW (1720) with CD/DVD or the like media
(1721), thumb-drive (1722), removable hard drive or solid state
drive (1723), legacy magnetic media such as tape and floppy disc
(not depicted), specialized ROM/ASIC/PLD based devices such as
security dongles (not depicted), and the like.
[0219] Those skilled in the art should also understand that the term
"computer readable media" as used in connection with the presently
disclosed subject matter does not encompass transmission media,
carrier waves, or other transitory signals.
[0220] Computer system (1700) can also include a network interface
(1754) to one or more communication networks (1755). The one or
more communication networks (1755) can, for example, be wireless,
wireline, or optical. The one or more communication networks (1755)
can further be local, wide-area, metropolitan, vehicular and
industrial, real-time, delay-tolerant, and so on. Examples of the
one or more communication networks (1755) include local area
networks such as Ethernet, wireless LANs, cellular networks to
include GSM, 3G, 4G, 5G, LTE and the like, TV wireline or wireless
wide area digital networks to include cable TV, satellite TV, and
terrestrial broadcast TV, vehicular and industrial to include
CANBus, and so forth. Certain networks commonly require external
network interface adapters that attach to certain general-purpose
data ports or peripheral buses (1749) (such as, for example USB
ports of the computer system (1700)); others are commonly
integrated into the core of the computer system (1700) by
attachment to a system bus as described below (for example Ethernet
interface into a PC computer system or cellular network interface
into a smartphone computer system). Using any of these networks,
computer system (1700) can communicate with other entities. Such
communication can be uni-directional, receive only (for example,
broadcast TV), uni-directional send-only (for example CANbus to
certain CANbus devices), or bi-directional, for example to other
computer systems using local or wide area digital networks. Certain
protocols and protocol stacks can be used on each of those networks
and network interfaces as described above.
[0221] Aforementioned human interface devices, human-accessible
storage devices, and network interfaces can be attached to a core
(1740) of the computer system (1700).
[0222] The core (1740) can include one or more Central Processing
Units (CPU) (1741), Graphics Processing Units (GPU) (1742),
specialized programmable processing units in the form of Field
Programmable Gate Arrays (FPGAs) (1743), hardware accelerators for
certain tasks (1744), graphics adapters (1750), and so forth. These
devices, along with Read-only memory (ROM) (1745), Random-access
memory (1746), internal mass storage (1747) such as internal
non-user accessible hard drives, SSDs, and the like, may be
connected through the system bus (1748). In some computer systems,
the system bus (1748) can be accessible in the form of one or more
physical plugs to enable extensions by additional CPUs, GPU, and
the like. The peripheral devices can be attached either directly to
the core's system bus (1748), or through a peripheral bus (1749).
In an example, the screen (1710) can be connected to the graphics
adapter (1750). Architectures for a peripheral bus include PCI,
USB, and the like.
[0223] CPUs (1741), GPUs (1742), FPGAs (1743), and accelerators
(1744) can execute certain instructions that, in combination, can
make up the aforementioned computer code. That computer code can be
stored in ROM (1745) or RAM (1746). Transitional data can also be
stored in RAM (1746), whereas permanent data can be stored, for
example, in the internal mass storage (1747). Fast storage and
retrieval for any of the memory devices can be enabled through the
use of cache memory, which can be closely associated with one or
more CPU (1741), GPU (1742), mass storage (1747), ROM (1745), RAM
(1746), and the like.
[0224] The computer readable media can have computer code thereon
for performing various computer-implemented operations. The media
and computer code can be those specially designed and constructed
for the purposes of the present disclosure, or they can be of the
kind well known and available to those having skill in the computer
software arts.
[0225] As an example and not by way of limitation, the computer
system having architecture (1700), and specifically the core (1740)
can provide functionality as a result of processor(s) (including
CPUs, GPUs, FPGA, accelerators, and the like) executing software
embodied in one or more tangible, computer-readable media. Such
computer-readable media can be media associated with
user-accessible mass storage as introduced above, as well as
certain storage of the core (1740) that is of a non-transitory
nature, such as core-internal mass storage (1747) or ROM (1745).
The software implementing various embodiments of the present
disclosure can be stored in such devices and executed by core
(1740). A computer-readable medium can include one or more memory
devices or chips, according to particular needs. The software can
cause the core (1740) and specifically the processors therein
(including CPU, GPU, FPGA, and the like) to execute particular
processes or particular parts of particular processes described
herein, including defining data structures stored in RAM (1746) and
modifying such data structures according to the processes defined
by the software. In addition or as an alternative, the computer
system can provide functionality as a result of logic hardwired or
otherwise embodied in a circuit (for example: accelerator (1744)),
which can operate in place of or together with software to execute
particular processes or particular parts of particular processes
described herein. Reference to software can encompass logic, and
vice versa, where appropriate. Reference to a computer-readable
media can encompass a circuit (such as an integrated circuit (IC))
storing software for execution, a circuit embodying logic for
execution, or both, where appropriate. The present disclosure
encompasses any suitable combination of hardware and software.
[0226] While this disclosure has described several exemplary
embodiments, there are alterations, permutations, and various
substitute equivalents, which fall within the scope of the
disclosure. It will thus be appreciated that those skilled in the
art will be able to devise numerous systems and methods which,
although not explicitly shown or described herein, embody the
principles of the disclosure and are thus within the spirit and
scope thereof.
APPENDIX A: ACRONYMS
ALF: Adaptive Loop Filter
AMVP: Advanced Motion Vector Prediction
APS: Adaptation Parameter Set
ASIC: Application-Specific Integrated Circuit
ATMVP: Alternative/Advanced Temporal Motion Vector Prediction
AV1: AOMedia Video 1
AV2: AOMedia Video 2
BMS: Benchmark Set
BV: Block Vector
CANBus: Controller Area Network Bus
CB: Coding Block
CC-ALF: Cross-Component Adaptive Loop Filter
CD: Compact Disc
CDEF: Constrained Directional Enhancement Filter
CPR: Current Picture Referencing
CPU: Central Processing Unit
CRT: Cathode Ray Tube
CTB: Coding Tree Block
CTU: Coding Tree Unit
CU: Coding Unit
DPB: Decoder Picture Buffer
DPCM: Differential Pulse-Code Modulation
DPS: Decoding Parameter Set
DVD: Digital Video Disc
FPGA: Field Programmable Gate Array
GOP: Groups of Pictures
GPU: Graphics Processing Unit
GSM: Global System for Mobile communications
HDR: High Dynamic Range
HEVC: High Efficiency Video Coding
HRD: Hypothetical Reference Decoder
IBC: Intra Block Copy
IC: Integrated Circuit
ISP: Intra Sub-Partitions
JCCR: Joint CbCr Residual Coding
JEM: Joint Exploration Model
JVET: Joint Video Exploration Team
LAN: Local Area Network
LCD: Liquid-Crystal Display
LR: Loop Restoration Filter
LRU: Loop Restoration Unit
LTE: Long-Term Evolution
MPM: Most Probable Mode
MV: Motion Vector
OLED: Organic Light-Emitting Diode
PBs: Prediction Blocks
PCI: Peripheral Component Interconnect
PDPC: Position Dependent Prediction Combination
PLD: Programmable Logic Device
PPS: Picture Parameter Set
PU: Prediction Unit
RAM: Random Access Memory
ROM: Read-Only Memory
SAO: Sample Adaptive Offset
SCC: Screen Content Coding
SDR: Standard Dynamic Range
SEI: Supplementary Enhancement Information
SNR: Signal Noise Ratio
SPS: Sequence Parameter Set
SSD: Solid-state Drive
TU: Transform Unit
USB: Universal Serial Bus
VPS: Video Parameter Set
VUI: Video Usability Information
VVC: Versatile Video Coding
WAIP: Wide-Angle Intra Prediction
* * * * *